(Even more) theorems for free?

While this post talks about Haskell, I believe the question is more general, and therefore on-topic for LtU.

I use Haskell rarely, so every time I do use it, I catch myself reimplementing standard functions, only because I do not know their names or which modules they live in.
Sometimes I am not sure whether some function I need is really standard, or whether it can be obtained by composing several standard functions, perhaps with a minor amount of custom code. E.g., should I implement a function with the following type from scratch:

Ord t => [(t, a)] -> [(t, b)] -> [(t, Either a b)]

Could there be a tool that helps me out by suggesting possible implementations?

Note that I intentionally do not describe the meaning of the function, but only its type - the idea is to either imitate "theorems for free" and deduce the only natural implementation, or to find existing functions with matching type, or even better a combination of both.
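
For concreteness, one implementation such a tool might plausibly suggest -- reading the type as a merge of two key-sorted association lists, which is only my guess at a natural meaning:

merge :: Ord t => [(t, a)] -> [(t, b)] -> [(t, Either a b)]
merge xs []  = [ (t, Left a)  | (t, a) <- xs ]
merge []  ys = [ (t, Right b) | (t, b) <- ys ]
merge xs@((t, a) : xs') ys@((u, b) : ys')
  | t <= u    = (t, Left a)  : merge xs' ys
  | otherwise = (u, Right b) : merge xs  ys'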

Can something like that be implemented as a mere (IDE) tool on top of the existing Haskell type system, or is the type system not rich enough, so that I would have to use Epigram?

[On edit: I should probably rephrase the question: how far can such a tool go using the Haskell type system? It is obvious that some degree of deduction is possible, and some is not. E.g., I am not sure it's reasonable to expect the tool to suggest using an existing function of type (x -> y -> Ordering) -> [x] -> [y] -> [Either x y], with additional code to witness the isomorphism(?) between these types.

BTW, as soon as I typed the word "isomorphism" I remembered that I probably should look at Frank Atanassow papers...]

Djinn

On the #haskell channel there is lambdabot, which has a theorem prover/function finder called Djinn. You supply the type, and it finds an implementation that fits. See previously on LtU.

Not using libraries

Thanks for the pointer!

One issue I have with Djinn is that it gives you the implementation from scratch, without reusing existing functions.

I mean, instead of getting f x = x I would prefer getting f = id.

Also, ignoring libraries means it knows nothing about lists or orderings, so my type from the original post cannot be solved.

Trying Djinn anyway

You can teach Djinn about Ordering. And let's use Maybe instead of lists just to see what happens. Oh, and the Ord context will have to be replaced by an explicit comparison function.

Djinn> data Ordering = LT | EQ | GT
Djinn> h ? (t->t->Ordering) -> Maybe (t,a) -> Maybe (t,b) -> Maybe (t, Either a b)
h :: (t -> t -> Ordering) -> Maybe (t, a) -> Maybe (t, b) -> Maybe (t, Either a b)
h a b c =
    case b of
    Nothing -> Nothing
    Just (d, e) -> case a d d of
                   LT -> Nothing
                   EQ -> Nothing
                   GT -> case c of
                         Nothing -> Just (d, Left e)
                         Just (f, g) -> Just (f, Right g)

Hmmm, doesn't really look anything like what you had in mind, does it? :)
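
For contrast, a hand-written version closer to a plausible merge reading -- my guess at the intended semantics, preferring the entry with the smaller key when both are present -- might be:

h :: (t -> t -> Ordering) -> Maybe (t, a) -> Maybe (t, b) -> Maybe (t, Either a b)
h _   Nothing       Nothing       = Nothing
h _   (Just (t, a)) Nothing       = Just (t, Left a)
h _   Nothing       (Just (u, b)) = Just (u, Right b)
h cmp (Just (t, a)) (Just (u, b)) =
    case cmp t u of
      GT -> Just (u, Right b)
      _  -> Just (t, Left a)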

Inferring type isomorphisms generically, but not uniquely

I guess what I meant would use something like Inferring Type Isomorphisms Generically, but with the uniqueness constraint dropped, thus allowing the user to pick the witness he wants.

E.g., picking either Left . fst or Right . snd from a list of witnesses for (Bool, Int) -> Either Bool Int.
Note how I prefer to see fst instead of \(x,_) -> x.

Hoogle

Maybe something similar to Hoogle is what you're after.

Weakening requirements

I wonder whether providing a yes/no type-inhabitation answer would be easier than finding witnesses.
For example, I have the following type lying (pun unintended) before me:

((a -> a) -> (b -> b)) -> ((a -> Maybe a) -> (b -> Maybe b))

If I knew it was inhabited, I would try to implement it with more energy than if I suspected it might be uninhabited.

PS: certain natural constraints are implied, so, e.g., always returning Nothing is a bad solution.
PPS: an even simpler example would be (a -> (a -> r) -> r) -> (a -> a). Can a function in Haskell have such a type? How can we tell?

How can we tell?


((a -> a) -> (b -> b)) -> ((a -> Maybe a) -> (b -> Maybe b))

This is a function that takes three arguments, the last of type b, and returns a Maybe b. But we already have return :: b -> Maybe b, so a solution is simply const (const return), which ignores its first two arguments.
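
Spelled out with its type (Just is return for Maybe):

f :: ((a -> a) -> (b -> b)) -> (a -> Maybe a) -> (b -> Maybe b)
f _ _ = Just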

So I'd suggest the first step, before using any theorem prover, is to erase extra parentheses by hand so it's easier to parse by eye.

Unused arguments

Well, I mentioned that Nothing is a bad solution, but I should have added that Just wrapping the third argument is no good either.
I would like all arguments to be used.

Used arguments

That's why the solution with all arguments used is listed first. :)

Continuations

Well, but it is still not what I intended.
For one, it can never return Nothing.

I have a solution in mind that would work in an impure language (it requires at least exceptions). So I guess what I need is to CPS-transform some of the function types involved, and then see if Djinn can find what I meant.
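
For instance, Church-encoding the Maybe results (Maybe c is isomorphic to forall r. (c -> r) -> r -> r; here I flatten the quantifier to the top, which is only my guess at a Djinn-digestible transform) would give something like:

((a -> a) -> (b -> b)) -> (a -> (a -> r) -> r -> r) -> (b -> (b -> r) -> r -> r)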

BTW, the intent of the function was: given a container update function (which is similar to map, but not as natural) and a possibly failing element transformation, obtain a possibly failing container transformation.

Nothing

It might not be what you have in mind, but it never returns Nothing.
There's a Just constructor on the outside.

Something needed

Yes, the only way to get Nothing is the constant-Nothing variant.

I changed the type to ((a -> c) -> (b -> d)) -> ((a -> Maybe c) -> (b -> Maybe d)) so there are fewer possibilities to cheat, and the only inhabitant found is constant Nothing.

The problem (or feature) is the lack of speculative execution: the code must commit early to one of the choices: either return Nothing, or call the container update (and risk that the element update fails, at which point it is too late to return Nothing). The tantalizing part is that an impure implementation is obvious.
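
To illustrate, here is a sketch of the impure idea in GHC Haskell (unsafePerformIO and the Fail exception are my own choices for illustration, not a recommendation; note that evaluate only forces the result to WHNF, so for a real container the exception could still escape the catch):

import Control.Exception (Exception, catch, evaluate, throw)
import System.IO.Unsafe (unsafePerformIO)

data Fail = Fail deriving Show
instance Exception Fail

-- Commit to calling the container update, and smuggle the element
-- transformation's failure back out as an exception.
update :: ((a -> c) -> (b -> d)) -> (a -> Maybe c) -> (b -> Maybe d)
update upd f b = unsafePerformIO $
    fmap Just (evaluate (upd g b)) `catch` \Fail -> return Nothing
  where
    g a = case f a of
            Just c  -> c
            Nothing -> throw Fail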

You have me intrigued

Occasionally, as a mathematician, I want to prove result A using result B even though I already have another proof not using B. I guess that's exactly what you're asking for. Thinking purely in terms of logic, I'm not 100% sure it's a well-defined notion, but thinking computationally we have a pretty good idea of what it means for a function to depend on an argument. So your question gives a way to make rigorous sense of the notion of a proof that must depend on a premise. Interesting.

It's a perfectly

It's a perfectly well-defined notion in logic -- it's a relevant logic. In a relevant logic, a premise in a proof must be used in the derivation. (Hence the name -- the premises must be relevant.) This amounts to rejecting the axiom of weakening -- i.e., rejecting the axiom that A -> (B -> A).

If you also drop the principle of contraction, then you get linear logic.

Yes...but that's a different logic...

A mathematician, speaking informally, is happy to say that one proof depends on another even when using classical logic and proofs full of weakening steps. Similarly, the original question doesn't ban us from using weakening in general, just certain specific uses of it. Is it straightforward to define a logic that keeps weakening in general, but makes sense of the idea of requiring certain specific premises to be used? For example, could we use a slightly restricted axiom schema of weakening and get meaningful results? Just banning the use of weakening for the required premises themselves wouldn't do: in the classical case, banning P -> (Q -> P) for a specific Q would still leave us free to use P -> (~~Q -> P), which is essentially the same thing.

Linear logic

Yes, it makes sense. Look at the ? and ! connectives in linear logic.
You just need to mark the things that can be weakened (or not weakened).

Yes, this makes sense. It's

Yes, this makes sense. It's pretty easy to cook up a logic that does the job. I'll use natural deduction because hey, that's what I know best.

The basic idea is that the usual presentation of logic looks more or less like this:

A,B ::= P | A -> B
Gamma ::= nil | Gamma, A true

P ranges over the atomic formulas, and we'll take implication as our only connective. A derivation of Gamma |- A true is a proof that the formula A is true in some context of hypotheses, denoted by Gamma. In natural deduction style, we have structural rules (weakening and contraction), a hypothesis rule, and intro and elim rules for implication:

Gamma |- B true
----------------------- (weak)
Gamma, A true |- B true


Gamma, A true, A true |- B true 
------------------------------- (contract)
Gamma, A true |- B true


-----------------------  (hyp)
A true |- A true


Gamma, A true |- B true
----------------------- (-> Intro)
Gamma |- A -> B true


Gamma |- A -> B true   Gamma |- A true
-------------------------------------- (-> Elim)
Gamma |- B true

Relevant logic changes this by throwing out the weakening rule. So what happens if we want to allow some hypotheses to use weakening, and some not? Well, the easy thing to do is just to add a new context of strictly used hypotheses! Then we can change our judgement from Gamma |- A true to Gamma; Delta |- A true. This means that A is true under the assumptions in Gamma and the assumptions in Delta, and furthermore every assumption in Delta is actually used at least once.

A, B ::= P | A -> B
Gamma ::= nil | Gamma, A true
Delta ::= nil | Delta, A strict

-----------------------  (Strict hyp)
nil; A strict |- A true


--------------------- (Unrestricted hyp)
A true; nil |- A true


Gamma; Delta |- B true
------------------------------  (weak)
Gamma, A true; Delta |- B true


Gamma, A true, A true; Delta |- B true
-------------------------------------- (contract)
Gamma, A true; Delta |- B true


Gamma; Delta, A strict, A strict |- B true
------------------------------------------  (strict contract)
Gamma; Delta, A strict |- B true


Gamma, A true; Delta |- B true
------------------------------ (-> Intro)
Gamma; Delta |- A -> B true


Gamma; Delta |- A true   Gamma; Delta' |- A -> B true
----------------------------------------------------- (-> Elim)
Gamma; Delta, Delta' |- B true


Gamma; Delta |- A true    Gamma; Delta', A strict |- B true
---------------------------------------------------- (strict cut)
Gamma; Delta, Delta' |- B true

The key idea that makes this work is that the rules with multiple premises divide the strict hypotheses between them, so that you know that each branch of the proof guarantees using some of the strict hypotheses. (You can use contraction to replicate a hypothesis, if both branches turn out to need it.)
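
As a small worked example, here is the K axiom A -> (B -> A) derived with the rules above. Note how B enters only through weakening on Gamma; since there is no weakening rule for Delta, a strict hypothesis can only enter a derivation via (Strict hyp), i.e. by actually being used:

---------------------  (Unrestricted hyp)
A true; nil |- A true
------------------------------  (weak)
A true, B true; nil |- A true
------------------------------  (-> Intro)
A true; nil |- B -> A true
-------------------------------  (-> Intro)
nil; nil |- A -> (B -> A) true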

If you internalized the notion of relevance into intuitionistic logic as a new modality, I think that it would turn out to be box. (Maybe even S4's box?) Then you could give it a categorical semantics as a comonad over a CCC. But I'm pretty shaky on semantics in general, and categorical semantics of modal logic is starting to get out of my depth. So take this last paragraph with a grain of salt.

So all we need now...

...is a theorem prover for your logic...

Theorem proving

So these questions can certainly be answered by Djinn:

Djinn> :set +m
Djinn> f ? ((a -> a) -> (b -> b)) -> ((a -> Maybe a) -> (b -> Maybe b))
f :: ((a -> a) -> b -> b) -> (a -> Maybe a) -> b -> Maybe b
f a b c =
    Just (a (\ d ->
             case b d of
             Nothing -> d
             Just e -> e) c)
-- or
f a _ b = Just (a (\ c -> c) b)
-- or
f _ _ a = Just a
-- or
f _ _ _ = Nothing

Djinn> g ? (a -> (a -> r) -> r) -> (a -> a)
g :: (a -> (a -> r) -> r) -> a -> a
g _ a = a

Even though Djinn does not necessarily give you all possible implementations, the fact that you only get one answer for the second query is a good hint that it's the only one.

Djinn has a simple heuristic: it tries to use as many variables as possible, to avoid trivial solutions. So the first implementation of f listed is one that might do what you want.

Automatic generation of free theorems

Aha, something interesting: Automatic generation of free theorems.

Try entering [a] -> [a], and compare the result to the one given in "Theorems for free!". Looks similar!
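
For reference, the free theorem for f :: [a] -> [a] is the classic one from Wadler's paper: f commutes with map. Stated as a (QuickCheck-style) Haskell property -- RankNTypes is needed for the polymorphic argument:

{-# LANGUAGE RankNTypes #-}

-- Free theorem for any f :: [a] -> [a]: f commutes with map.
freeTheorem :: Eq b => (forall c. [c] -> [c]) -> (a -> b) -> [a] -> Bool
freeTheorem f g xs = map g (f xs) == f (map g xs)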