Ninety-nine Lisp Problems

As Lemonodor says: 99 Problems But malloc Ain't One of Them.

Anyway, the list is based on a list of Prolog challenges, but the solutions (when they exist) are in Lisp.

99 Haskell problems

It would be interesting to see the types of solutions to some of these problems, say in Haskell.
Not that they are difficult to define, but there are still different possibilities (e.g., for P07).

PS: And if one wants to be completely protected from "going wrong", even the type for P01 is far from trivial.
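For instance, P07 (flatten a nested list) illustrates the point: [[a]] cannot nest to arbitrary depth, so one common possibility is a dedicated datatype. A sketch (the NestedList name is illustrative):

data NestedList a = Elem a | List [NestedList a]

-- Flatten by recursing through the nesting structure.
flatten :: NestedList a -> [a]
flatten (Elem x)  = [x]
flatten (List xs) = concatMap flatten xs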

Already been done.

Graphs need work

Problems 80-89 are open.

For additional points

Now an advanced exercise would be to rewrite the solutions so they are strongly typed :) I guess it's been a long time since I complained about type errors in Prelude functions, so I will repeat myself. Take, for example, the description of last:

Extract the last element of a list, which must be finite and non-empty.

Would you guess these constraints when you look at the type last :: [a] -> a? So where does last go when it is passed either an infinite or an empty list (certainly it cannot go wrong, maybe awry?)? One may say I really wanted to see these solutions in Epigram (but see this and especially this).
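For comparison, here is how the non-emptiness constraint can be pushed into the type in plain Haskell, using Data.List.NonEmpty from base (a sketch; lastNE and safeLast are illustrative names):

import Data.List.NonEmpty (NonEmpty(..))

-- The empty case is unrepresentable, so no runtime
-- "Prelude.last: empty list" error is possible.
lastNE :: NonEmpty a -> a
lastNE (x :| [])     = x
lastNE (_ :| (y:ys)) = lastNE (y :| ys)

-- Alternatively, make the partiality explicit in the result type.
safeLast :: [a] -> Maybe a
safeLast []     = Nothing
safeLast [x]    = Just x
safeLast (_:xs) = safeLast xs

Note that neither type says anything about finiteness: pass an infinite list and both still diverge, which is the harder half of the constraint.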

So where does last go

So where does last go when it is passed either an infinite or an empty list

It materializes on Milner's doorstep, looking confused and wanting to know what it's supposed to do now.

For really additional points...

you must be someone other than Oleg.

Now, to be really pedantic, there are no type errors in the Prelude; it's just that there exist polymorphic values other than bottom in Haskell. It's like Java, where every piece of code can throw any instance of RuntimeException. The problem here lies in partial functions and hidden effects, because error is as much an effect as hPutStr stderr "Ops!" >> exitWith (ExitFailure (-1)). Actually, I would consider even non-termination to be an effect, because how could we guess whether main :: IO Int terminates or not? Where are my coalgebras, and why do I have to fake them with algebras?
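To make the "polymorphic values" point concrete (an illustrative snippet, not from the thread):

-- error inhabits every type, just like bottom:
boom :: a
boom = error "Ops!"

-- The type [a] -> a cannot rule this out; the empty case hides
-- an effect as surely as writing to stderr would.
last' :: [a] -> a
last' [x]    = x
last' (_:xs) = last' xs
last' []     = error "last': empty list"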

Your final coalgebras are

Your final coalgebras are your initial algebras; might that be part of the problem?

Kind of

Actually, the main problem I see is that if lists also behave as streams, we can have non-terminating computations anywhere, and induction over ADTs may never end. I would prefer to keep these two concepts distinct, so that we don't need to use fast and loose reasoning to prove things, and because termination is good.

As a side note, since I read "Why Functional Programming Matters" I have wondered whether the benefits of laziness couldn't come from a strict language with proper codata. IMHO laziness keeps you honest about effects but not about non-termination.
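A sketch of what keeping the two concepts distinct could look like, even in Haskell syntax (the Stream type is illustrative):

-- [a] doubles as finite data and infinite codata; a separate codata
-- type makes the intent explicit.
data Stream a = Cons a (Stream a)   -- no nil constructor: always infinite

ones :: Stream Int
ones = Cons 1 ones                  -- productive corecursion, fine for codata

-- Consuming a Stream whole can never terminate, and now the type says so;
-- observation has to be bounded explicitly:
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs

In Haskell this remains a convention, since laziness is what makes ones well-defined; in a strict language the distinction would need genuine codata support, which is the point above.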

Total functional programming

Food for thought about these issues of termination, data/codata, recursion schemes and such can be found in Turner's 2004 paper, Total functional programming, where he argues for removing the possibility of non-termination altogether and gives clues (and links to research) about how to retain enough expressive power despite losing Turing-completeness.
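The flavor of the discipline, roughly: recursion is only allowed on structurally smaller arguments, so termination is evident from the program text (an illustrative sketch, not Turner's notation):

-- Accepted: each call peels one constructor off the input.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

-- Rejected by a totality checker: nothing guarantees the
-- argument shrinks, so termination is an open conjecture.
collatz :: Integer -> Integer
collatz 1 = 0
collatz n = 1 + collatz (if even n then n `div` 2 else 3 * n + 1)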

What exactly is an effect?

Is there any kind of formal, or even modestly precise, definition of "effect"? Even the more specific term "side effect" has never seemed anything more than a matter of opinion to me. Is it a side effect if a computation generates heat? How is that any different from writing to an append-only log?

Calling nontermination an effect surprises me. I would classify nontermination as having more to do with inputs than with outputs, so it strikes me as more like impurity, if anything, than a side effect. But in fact I mostly view nontermination as being in a class by itself.

Normally the generation of

Normally the generation of heat is considered an effect not of the function but of the means you use to evaluate it.

I don't understand the point

I don't understand the point of the distinction you're drawing. My point was that what we consider an effect, and what we consider outside the universe of consideration, is a kind of abstraction, and what we leave out is design rather than math. The fact that we usually leave out heat generation but leave in appending to a log is a matter of convention, not theory. We could just as easily leave out the log output. The same is not true of purity, or nontermination.

All of which leads me to the conclusion that the term "effect" doesn't have a hard definition, but rather is more of an evocative term. Calling something an effect is saying we want to keep track of it, not saying anything about its nature or its effect on the rest of our computation.

If we are talking static type analysis

...then when we talk about effects, what we really mean is that sequencing is introduced. Or to put it another way, static analysis does not relate to actually running a program in a specific environment at a specific time. Programs are merely abstractions that can be analyzed.

The program as a model does not generate heat, as in your example, because heat is not part of the program as an abstraction; there's no way to predictably build it into the abstraction. Running a program as a sequence of instructions on a specific machine can have all sorts of effects that go well beyond the mathematical model (for all we know, it may cause a big boom because it is attached to a guided missile).

Detectability

Heat generation could be an effect if you have a heat-measuring function. So an effect is anything that could cause a function to return a different result with the same inputs.

This would mean that causing an effect is actually not a problem, only measuring the effect is. However, you usually want to cause effects in a certain order, so they will still have to be treated specially.
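In Haskell terms, the moment an effect is observable, order matters, which is why such computations are sequenced explicitly in IO (an illustrative snippet):

import Data.IORef

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  modifyIORef r (+1)   -- causing the effect
  n <- readIORef r     -- measuring it: now the relative order matters
  print n              -- prints 1; swap the two lines above and it prints 0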

Normally...

Normally, but not always.

The beauty is in the eye of the beholder

Searching for "effect masking" and "type and effect systems" gives us lots of results on interesting research related to effects. The trick is, as you mention it, coming up with an abstraction that let us identify which effects are we interested in tracking.

My personal definition is that evaluating an expression should give me a value; any other resource usage is an effect. Space is one kind of resource, so we could track memory consumption during evaluation, and final versus initial memory usage. For such a system, see the Hughes and Pareto papers on sized region types. Time is another resource, so we could keep track of algorithmic complexity, termination, and liveness. IO is another resource; non-determinism can be seen as a resource (i.e., oracle usage); and control operators and exceptions are control effects (as opposed to world effects). In the current state of the art we can't have a system that keeps track of arbitrary effects, only of those predicted by the designer, but that doesn't mean the others don't exist.
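A minimal sketch of the "effects predicted by the designer" idea: wrap IO in a newtype so the type tracks one chosen effect (logging) and nothing else (the LogIO name and API are illustrative):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import System.IO (hPutStrLn, stderr)

-- Only this module can unwrap LogIO, so logging is the sole
-- effect available to its clients.
newtype LogIO a = LogIO (IO a)
  deriving (Functor, Applicative, Monad)

logLine :: String -> LogIO ()
logLine = LogIO . hPutStrLn stderr

runLogIO :: LogIO a -> IO a
runLogIO (LogIO io) = io

-- The type advertises "may log, does nothing else":
greet :: String -> LogIO ()
greet name = logLine ("hello, " ++ name)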

I agree with your later post that it's a matter of abstraction, but finding the correct abstraction is tricky, and in my day job I keep seeing situations where a program doesn't terminate because of an unverified assumption. I know that a compiler that keeps track of non-termination could spot those cases* in the same way System F catches subtle bugs in my code.

* Actually, it wouldn't catch these bugs so much as restrict the programmer to idioms that aren't problematic to verify. The sized region types paper has a great example of that.

Seems Only The Prolog Problems Are Done!

The 99 Prolog puzzles are complete, whereas the Perl, Lisp, and Haskell problem sets have big gaps! Is there a lesson here?

Also the Prolog code seems shorter and clearer. Perhaps because the problems were chosen for Prolog?

The others are based on the Prolog problems.

The Lisp ones were started in order to compare them to the Prolog ones. The Perl ones were started just recently for comparison to both Lisp and Prolog. I imagine the Haskell problems were done similarly -- started later, and done for comparison.

When one body of work is started in reaction to another that is already finished, it's natural that it won't be finished as soon.

Prolog comparisons

Also the Prolog code seems shorter and clearer.

I am not so sure about that. Take a look at P50, the Huffman code problem. The Prolog solution is long and pretty gruesome. As well, there are an awful lot of cuts sprinkled through some of the solutions, and those are part of what keeps me bouncing off Prolog.

You can easily get rid of the cuts

You can easily get rid of the cuts in that solution with if-then-else, first-argument indexing, etc. For example, here's a version of the solution's insert/3 predicate without cut:

% insert(+Sorted0, +Node, -Sorted): insert a node n(Frequency,Tree)
% into a list kept sorted by ascending frequency.
insert([], N, [N]).
insert([n(F0,Y)|Ns], n(F,X), Is) :-
    (   F < F0 ->
        Is = [n(F,X),n(F0,Y)|Ns]   % new node is cheapest: put it in front
    ;   Is = [n(F0,Y)|Rest],       % otherwise keep the head and
        insert(Ns, n(F,X), Rest)   % insert into the tail
    ).
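For example (a query against the predicate above, using the same n(Frequency,Tree) representation), insert([n(2,a),n(5,b)], n(3,c), Is) binds Is = [n(2,a),n(3,c),n(5,b)].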