Iteration function in lambda calculus

I have a function like this

iter :: Int -> (a -> a) -> a -> a
iter 0 f a = a
iter n f a = f (iter (n - 1) f a)   -- i.e. f (f (... (f a) ...)), n times

How can I define such a function in the untyped lambda calculus?

Any hint or help would be appreciated.


Off Topic

This question isn't on topic for LtU, and sounds like a homework problem to boot.

Not to mention that it is

Not to mention that it is trivial, too. Maybe you should try the haskell-cafe mailing list.

No fun

That's not the fun way to answer such a question. Here's the fun way:

iter = id
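
(A sketch of why this is a complete answer, assuming the numbers of the untyped calculus are Church numerals; the names Church, zero and suc below are illustrative, not part of the original question:)

type Church a = (a -> a) -> a -> a

zero :: Church a
zero _ x = x

suc :: Church a -> Church a
suc n f x = f (n f x)

-- a Church numeral n *is* "apply f to x, n times", which is exactly
-- what iter is asked to compute, so
iter :: Church a -> (a -> a) -> a -> a
iter = id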

Thanks

Thanks for your reply, I figured it out.
Anyway, I probably should not have asked such a question in this forum. Is there an option to delete the thread? I could not find the link.

Fun may not be Good

Making fun of people is not always desirable behavior in a community. People asking inappropriate questions on LtU deserve a respectful answer.

That said, no harm meant; we all have our schadenfreude moments. I usually let off steam on reddit, where nobody cares.

Fun vs fun

Making fun of people is not always desirable behavior in a community. People asking inappropriate questions on LtU deserve a respectful answer.

I agree, and I did not mean to make fun, only to have fun. I submit immodestly that the brief answer is also not a bad one, in the sense that trying to understand why it is correct is likely to lead one to solve the question oneself.

Another question

In fact, though, that's only true if the inferior type, (a -> a) -> a -> a, is chosen for simply-typed Church numerals. The superior type is a -> (a -> a) -> a, which requires a trickier answer to the question.
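
(For concreteness, a sketch of the numeral 3 at each of the two types; the names threeA and threeB are illustrative:)

threeA :: (a -> a) -> a -> a
threeA f x = f (f (f x))

threeB :: a -> (a -> a) -> a
threeB x f = f (f (f x))

-- the two typings are interconvertible: threeB = flip threeA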

Now here's another question: why do I call the latter type superior?

I don't know!

The types are iso, so I can't tell. Post an answer in a day or two?

Will do

As a hint: category theory tends to make structures super nice by fiat, here obliterating the messy distinction I am getting at.

We could say that one is a

We could say that one is a church-encoding of `Zero : n | Succ : n -> n`, while the other is `Succ : n -> n | Zero : n`. I still find it weird to declare one "superior" to the other (although I would also more naturally use `a -> (a -> a) -> a`).
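
(A sketch of that correspondence in Haskell, with illustrative names: each type is the fold of Nat with the constructor cases taken in one of the two orders:)

data Nat = Zero | Succ Nat

encodeZS :: Nat -> a -> (a -> a) -> a      -- Zero : n | Succ : n -> n
encodeZS Zero     z s = z
encodeZS (Succ n) z s = s (encodeZS n z s)

encodeSZ :: Nat -> (a -> a) -> a -> a      -- Succ : n -> n | Zero : n
encodeSZ Zero     s z = z
encodeSZ (Succ n) s z = s (encodeSZ n s z)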

Not what I'm after

I don't see why the ordering of operators should matter from an algebraic point of view either.

point-free composition

I think (a -> a) -> a -> a is the superior type (in terms of notation). Reading it as (a -> a) -> (a -> a), we can write f . g . h :: (a -> a) -> (a -> a), where f, g, and h each have type (a -> a) -> (a -> a).

I've recently worked quite a bit with arrows, and this form comes up for reactive signal transformers. (f . g . h) is more commonly written (h >>> g >>> f) in Haskell arrow notation.

Yes, nice

The composition operator is useful in this case, at least as useful as anything to do with Church numerals actually can be: it is addition.

Multiplication

In this case the composition operator is multiplication.

I.e., (iter 3) . (iter 4) runs an inner loop four times inside an outer loop that runs three times; const id is the zero value.
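
(A quick numeric check of both claims, using illustrative helpers church and unchurch:)

church :: Int -> (a -> a) -> a -> a
church 0 _ x = x
church n f x = f (church (n - 1) f x)

unchurch :: ((Int -> Int) -> Int -> Int) -> Int
unchurch n = n (+ 1) 0

-- unchurch (church 3 . church 4)  ==  12   -- composition multiplies
-- unchurch (church 3 . const id)  ==  0    -- const id is zero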

Right

I knew that, I think.

Functional equation

My immediate reply to this (which is decidedly not the point) was “Don't you mean iter 3 (iter 4), or, depending on terseness, iter 3 4 or even just 3 4?” I then realised that any two ‘iteration arrows’ g and h satisfy the functional equation (*) g . h = g h, which seems interesting in its own right. I don't know the “theorems for free” yoga, and can't find the online implementation I seem to remember—but, if it says that every arrow (†) (a -> a) -> a -> a is an ‘iteration arrow’ (my proof: I can't think of any others), then the functional equation (*) characterises the type (†)!

Along those lines, is there any reason to think of const 0, rather than 0, as the 0 value?

EDIT: Oh dear. It is surely a rather dire error for a mathematician to confuse 0 = const id with 1 = id! Had I fixed that error, I would probably have noticed that identifying g . h with g h confuses multiplication, not with addition, but with exponentiation; so g . h = g h should be regarded as, not an identity, but rather an exponential Diophantine equation.
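
(Concretely, with illustrative helpers: church n builds the numeral n and unchurch reads one back:)

church :: Int -> (a -> a) -> a -> a
church 0 _ x = x
church n f x = f (church (n - 1) f x)

unchurch :: ((Int -> Int) -> Int -> Int) -> Int
unchurch n = n (+ 1) 0

-- unchurch (church 2 . church 3)  ==  6   -- composition: 2 * 3
-- unchurch (church 2 (church 3))  ==  9   -- application: 3 ^ 2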

I don't think you need

I don't think you need parametricity / theorems for free in this case. You just ask: what are the normal terms of type (a -> a) -> a -> a in the empty environment? (Insisting on the empty environment rules out weird constants, which is the work that reasoning about polymorphism would otherwise do.) You can then easily see, by induction, that they are all of the form (fun f x -> f (f (... (f x) ...))).

The problematic, but not inferior, Church encoding

I'm persuaded by David's notational point that, if you were seriously to use Church numerals, the (a -> a) -> a -> a encoding allows more use of point-free style, and so is not inferior.

The issue I had in mind is more philosophical. Prawitz's thesis says that the value of a proof is its normal form, and by extension so too the values of data types in typed lambda-calculi. There are issues with the thesis, but we can say a type is unproblematic when its semantics works cleanly in this way.

With the beta-eta theory, both encodings work nicely: the zero value is the term with two lambda abstractions, one of which is vacuous; each encoding has a successor function, which is the unique slowest-growing monotone endofunction over the Church numerals; and each number value, in either representation, is either zero or the normal form obtained by applying the successor function to a value. It is simply perfect.

Now turn to the beta theory, which is the theory for which Prawitz's thesis was first advanced. With a -> (a->a) -> a, these good properties remain. With the (a -> a) -> a -> a encoding, however, we have a value that is not zero and not the normal form obtained from applying the successor to any value. It is the identity function, its successor is two, and it is different from one.

So while the (a -> a) -> a -> a encoding has many desirable properties in its (rarely seen) practical use, it is problematic.

Your message is intriguing

Your message is intriguing in that it almost explains your point, but at the last moment denies us the counter-example that we are expecting. If that was meant as an incentive to re-read more deeply and come up with the counter-example ourselves, it worked for me: you're speaking of the difference between λf.f and λfλx.fx, right?

Now that I read your answer, I remember having read something similar a few years ago in, I think, "Proofs and Types" by Girard et al. I suppose this is folklore for the more theoretically minded CS people. In the "practical" functional programming community, we kind of always take eta for granted (except when it fails, e.g. with side effects and the value restriction).
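
(Spelling out the check, in the notation of the untyped calculus:)

one  = λf. λx. f x
two  = λf. λx. f (f x)
succ = λn. λf. λx. f (n f x)

succ (λf. f)  →β  λf. λx. f ((λg. g) f x)  →β  λf. λx. f (f x)  =  two

So λf.f is a beta-normal inhabitant of (a -> a) -> a -> a that is eta-equal to one but beta-equal to no numeral.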

Counterexample accessed

Yes and yes: that is the counterexample, and the point is made in Proofs & Types: while you can add eta rules to System F, they won't come for free with Martin-Löf-style data types, and so you get the problem above.

A further point is that sometimes you can't have eta rules: there's no reasonable eta rule for inductive data types in Martin-Löf's type theory, for instance.

Sorry if this gets a bit

Sorry if this gets a bit off-topic, but I'm interested in looking a bit further here. From my outside, high-level point of view, it seems that the issue is not with inductive data types per se, but more generally with all sum types. I had the feeling it was probably related to the general unease about the additive connectives in linear logic (in proof nets, for example). So I'd rather say it's not about eta or inductives, but rather about some deeper aspect of sums that we haven't internalized yet.

Pseudo eta

If an eta expansion consists of taking a formula in a derivation and inserting a subderivation that breaks the connective apart with elimination rules and then joins the pieces back together with introduction rules to recover the original formula, then both disjunction and the natural numbers type do have "eta rules".

For example, the fold operator for the natural numbers matches the elimination rule for the natural numbers type; in the propositional case this has type:

foldnat :: a -> (a -> a) -> Nat -> a

Then you can have the eta expansion:

n ==> foldnat Zero (\x -> Succ x) n

But such eta rewrites are not much use, since the introduction rule can't come last for either of them. For instance, this means that the introduction and elimination rules cannot be commuted, which introduces something of a snarl in the derivation. There isn't much of a sense in which the eta expansion could be said to analyse the structure of a formula, in the way that the eta expansions for conjunction, implication, and the dependently typed quantifiers do. So I call them pseudo etas.
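
(A runnable Haskell sketch of the above; etaNat is an illustrative name for the pseudo-eta expansion:)

data Nat = Zero | Succ Nat

foldnat :: a -> (a -> a) -> Nat -> a
foldnat z _ Zero     = z
foldnat z s (Succ n) = s (foldnat z s n)

-- the pseudo-eta expansion rebuilds its argument:
etaNat :: Nat -> Nat
etaNat n = foldnat Zero (\x -> Succ x) n

-- but on a variable argument the term is stuck with the eliminator
-- foldnat outermost: the introduction rule Succ never comes last,
-- which is what makes the expansion "pseudo"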

Linear logic is a special case: a lot of the "problems" encountered there come from Girardian perfectionism, whereby proof theories for the logic are expected to have superb properties that we don't demand of classical or intuitionistic logic. Finding eta equivalences for linear logic tends to be a problem when you have demanding criteria, like categorical coherence, that no one would expect to ask of a semantics of intuitionistic logic.

Problematic Integers

I recall running into similar issues while attempting to normalize integers... i.e. it is 'problematic' when there is a representation for both 0 and -0. Somehow I ended up wandering down the road of balanced ternary representations, then decided I had better things to do. I was intrigued to learn later about the use of ternary representations for numeric stability.
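
(A tiny sketch of why balanced ternary sidesteps the 0 / -0 issue; the names are illustrative:)

data Trit = Neg | Zer | Pos                  -- digits -1, 0, +1

toInt :: [Trit] -> Int                       -- little-endian digits
toInt = foldr (\t acc -> trit t + 3 * acc) 0
  where
    trit Neg = -1
    trit Zer = 0
    trit Pos = 1

negateT :: [Trit] -> [Trit]                  -- negation is digit-wise
negateT = map f
  where f Neg = Pos; f Zer = Zer; f Pos = Neg

-- the all-Zer string is its own negation, so once trailing Zer
-- digits are normalized away there is one zero, not a 0 and a -0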