Fixpoint theory, induction and recursion

I have always looked at recursion very pragmatically. A well-formed recursive function is a function that calls itself during its execution. To avoid infinite loops, some of its arguments have to be strictly decreasing on each recursive call, and there has to be a branch where no recursive calls are made.
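
As a minimal illustration (a sketch added here, not taken from the paper), consider the factorial function in Haskell notation: the argument strictly decreases on each call, and one branch makes no recursive call.

factorial :: Integer -> Integer
factorial n
  | n <= 0    = 1                      -- branch with no recursive call
  | otherwise = n * factorial (n - 1)  -- the argument strictly decreases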

Certainly, in theoretical computer science fixpoints are introduced, and from a mathematical point of view a recursive function is the solution of a fixpoint equation. But diving deeper into the subject, there are a lot of interesting aspects to this connection between recursive functions and fixpoints: there are connections to closures, to induction, etc.

I have tried to write down some of these interesting aspects in the little paper Closures and fixpoints. All the needed math is expressed within a programming language, in a manner that lets the compiler verify all derived facts. Maybe some of you are interested in reading this.


µ calculus, algebra and projective limits

Good to give fixed points attention. Isn't it nice that

a*b = µx.(ax + b)

i.e. the solution of

X = aX+b
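
Unfolding the equation (a derivation added here for illustration, not part of the original comment) shows why:

X = aX + b
  = a(aX + b) + b = a^2 X + (a + 1)b
  = ...
  = a^n X + (a^(n-1) + ... + a + 1)b

so the least solution is (1 + a + a^2 + ...)b = a*b.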

Chapter 4 of the Handbook of Process Algebra is about µ calculus.

I recommend to all interested readers the work by Jan Bergstra et al. on

Kleene star

Hi Andre,

within the expression a*b you probably let "*" denote the Kleene star. Otherwise your expression does not make sense to me.

Indeed

Forgot to write that. And:
ax = a and then x
x+y = x or y

Computational Representation Theorem

The best way to understand this is the Computational Representation Theorem in the Actor Model. See http://arxiv.org/abs/1008.1459

Fixpoint as a stable chemical solution

Programming chemically means that you cannot have loops or recursive constructs, just a (huge) number of finite molecules in a chemical solution that will eventually stabilise into what you 'programmed' to be the solution.

That's what I've learned while playing with Luca Cardelli's Ambient Calculus.

Recursion considered harmful

I know I will cause outrage here, but I can't hold myself.

I claim that while recursion can be of some theoretical interest, it should be eliminated from the practice of programming. My arguments against recursion as a programming technique are basically the same as against using GOTO:

  • recursive programs are generally much harder to prove, and require a certain discipline to ensure termination (as mentioned in the main post)
  • any recursive algorithm can be rewritten without using recursion

While some recursive solutions may look more "natural" and elegant than their non-recursive counterparts, often resorting to using recursion is hard to justify and is more like using a shortcut instead of proper coding.

Unfortunately, many programming languages, especially functional ones, rely heavily on recursion. This should be rectified before FP becomes the dominant programming paradigm.

I know I will cause outrage

I know I will cause outrage here

Don't flatter yourself.

Huh?

Don't flatter yourself.

Do you mean nobody cares about what I say?

Well, you forgot to say why

Well, you forgot to say why you think that proving that a recursion terminates is harder than proving that a loop terminates. It may be obvious to you, but not to me.

The methods of proving

The methods of proving programs with loops were invented in the 70s I believe. To formally prove a recursive program, we need to operate with predicates of a higher order than for non-recursive programs.

Leaving the formal methods alone, we can say that understanding the semantics of a recursive program, or designing one, is a task of higher order of complexity than that of a non-recursive one.

Ironic claim about program proofs

I find it ironic that you claim that "recursive programs are generally much harder to prove". What kind of experience in formal correctness proofs of programs have you had? In my own experience with the Coq theorem prover, induction (which is recursion at the proof level, and works brilliantly on functions defined recursively over... recursive data structures) was instrumental in writing the proofs and fairly convenient to use.

I am not aware of general proof assistants making it any easier to prove imperative programs with loops than programs in a pure (including total) language using structural recursion/induction. I would instead expect the contrary to hold: there is excellent work on Hoare-style program logics for verification of imperative programs that has had some successes, but my bet would be that it's still less convenient than good old structural recursion.

In short, I would guess you're dead wrong on this -- but not outraged.

That said, there is a push in the Squiggol-inspired community to use higher-order functionals to encode most recursion patterns (catamorphisms, anamorphisms, zygohistomorphisms...), with related but more convincing arguments that decomposing the computation from the traversal pattern makes for nicer program study and equational reasoning. What this means is "abstracting over stuff is useful", not "recursion is bad".

(I also don't see how you could hope to gain *anything*, correctness-wise, by encoding recursive functions into loops. This is a compilation pass that produces non-obvious code, so if done manually it would rather increase the chance of bugs than decrease it.)

Controversy between functional and imperative programming style

Functional and imperative programming styles have nearly always been discussed as "one or the other". There are proponents who claim that functional-style programming with recursion and inductive data types is better, and proponents who favor imperative-style programming with loops.

I am convinced that we need both.

We need recursion to define the very basic operations. Mathematicians, starting with Giuseppe Peano and Kurt Gödel, have shown us how to define the most basic arithmetic operations like addition and multiplication on natural numbers using recursion. Unfortunately, mathematics has not shown us any other way to define these basic operations. Therefore, in my opinion, it is indispensable to use recursion in order to define the most basic operations.
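
For illustration (a sketch in Haskell notation, added here rather than taken from the original comment), the Peano-style definitions look like this:

-- Peano-style natural numbers, with addition and multiplication
-- defined by recursion on the first argument.
data Nat = Zero | Succ Nat

add :: Nat -> Nat -> Nat
add Zero     n = n
add (Succ m) n = Succ (add m n)

mul :: Nat -> Nat -> Nat
mul Zero     _ = Zero
mul (Succ m) n = add n (mul m n)   -- multiplication defined via addition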

On the other hand, we need imperative-style programming with loops to make our algorithms efficient. Loops in imperative-style programming are in many cases the same as tail-recursive functions in functional-style programming. The proof obligations are the same: you have to prove that a tail-recursive function returns the same result as the more straightforward recursive function. For imperative programs you use loop invariants and bound functions to prove the equivalence. I personally find loops with invariants and bound functions easier to understand than tail-recursive functions, but this might be a matter of taste.
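
To make the correspondence concrete, here is a small sketch (in Haskell notation, added for illustration): a straightforward recursive sum, and a tail-recursive version whose accumulator plays the role of the loop variable and whose invariant is the usual loop invariant.

-- straightforward recursion
sumRec :: [Integer] -> Integer
sumRec []     = 0
sumRec (x:xs) = x + sumRec xs

-- tail-recursive ("loop") version; invariant: acc + sumRec rest == sumRec (original list)
sumLoop :: [Integer] -> Integer
sumLoop = go 0
  where
    go acc []     = acc             -- loop exit: the invariant gives the result
    go acc (x:xs) = go (acc + x) xs -- one iteration: the invariant is preserved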

The programming language which I am currently defining allows both. Recursive functions on inductive (or algebraic) data types and "normal" functions and procedures with loops. In many cases I use recursion to define a function and loops to implement the function.

My articles do emphasize a lot the foundations like recursion, induction etc. because these techniques are necessary to have enough tools to reason about programs. I have already published some articles like insertion sort and heap sort which demonstrate the needed proof techniques for imperative programs.

One comment on Coq: Coq is really a great tool. I have done a lot of work in it and I like it because of its sound mathematical definition. But there is a "however": Coq is a proof assistant and not a programming language, even if it allows program extraction. And I see no chance that Coq can be used within mainstream programming. You need to understand a lot of theory in order to use it.

My programming language (at least this is my goal) should be usable without understanding the theory. Insertion sort or heap sort or any other algorithm can be written by any programmer who understands the algorithm. There is no need to understand the theory of recursion and no need to understand fixpoints, closures, etc. These theories are just necessary to understand the inner workings of the compiler and to design the basic library which contains naturals, integers, lists, functions and predicates.

On the other hand we need

On the other hand we need imperative style programming with loops to make our algorithms efficient

Proper tail recursion can be implemented efficiently without ever recognizing a "loop", but I agree with some of your other points. Your programming language is the Eiffel dialect you post about occasionally?

Eiffel dialect

Your programming language is the Eiffel dialect you post about occasionally?

Yes. But in the course of the definition I have deviated a lot from classic Eiffel. Therefore I am not sure if it makes sense to call it a variant of Eiffel. But the syntactic style is still Eiffel-like. This might be a problem because many mainstream programmers do not like this Pascal/Algol-style syntax. Maybe I will change the syntax to look more like a language of the C family. For me the syntax is irrelevant.

OT: syntax (tar pits of any language)

i'd prefer s-exprs over c-style. now with clojure around, there's even a chance that people won't immediately ignore it just because it uses s-exprs. :-)

No, functional programming without recursion is the way

it is indispensable to use recursion in order to define the most basic operations

And how does it translate to programming languages? This doesn't mean recursion is the right way to write programs, does it?

On the other hand we need imperative style programming with loops to make our algorithms efficient.

Well, not really. I have plenty of examples of declarative programs potentially as efficient as the corresponding imperative programs.

definition

We need recursion to define the very basic operations. Mathematics starting with Guiseppe Peano and Kurt Gödel have shown us to define the most basic arithmetics like addition and multiplication on natural numbers using recursion.

Well, if we are doing computer programming and not logic, at the lowest level they are defined in hardware. 3+4=7 is a given fact for a computer because addition is a primitive operation in the CPU. For example, on x86

add src, dest 

replaces the contents of dest with the sum of src and dest. From a language standpoint that is built in, it doesn't need to be defined.

Definition

replaces the contents of dest with the sum of src and dest. From a language standpoint that is built in, it doesn't need to be defined.

The fact that it is a built-in operation does not imply that there is no need for a definition. On the contrary. You expect that 3+4=7 and you can test this with some assembler statements. But you might expect that 4+3=7 and you might test this as well. I am sure that you have a lot of expectations of the arithmetic logic unit of your CPU, don't you? I assume that you won't buy a CPU which returns 8 when given the operands 3 and 4 and the addition operation.

How do you express these expectations? It is some kind of specification, isn't it? Try to formulate the specification of the addition operation. Can you do this without recursion? I have not yet found any definition of the addition of two natural numbers without the use of recursion.

missing the point

Try to formulate the specification of the addition operation. Can you do this without recursion?

Assume that for Ints (primitive datatype)

a + b is defined to be
add a, b

Then my expectations for integers (machine independent) is
1) It agrees on primitives: i.e. for a and b ints and toInteger the function that converts Ints to Integers I would want: (toInteger a) + (toInteger b) = (toInteger (a + b))

2) I would want a function fromInteger which can convert some Integers back to Int, and I would want for any Integer x and Int a:
(fromInteger (x + (toInteger a) - x)) = a

Neither of those are recursive and those specify the properties I'd want.
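
Transcribed into Haskell (my own transcription, added for illustration), the two expectations read:

-- Note the first property holds only when a + b does not overflow the Int range.
agreesOnPrimitives :: Int -> Int -> Bool
agreesOnPrimitives a b = toInteger a + toInteger b == toInteger (a + b)

roundTrips :: Integer -> Int -> Bool
roundTrips x a = fromInteger (x + toInteger a - x) == a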

The recursive definition is important to "define" arithmetic without hardware definitions. With hardware definitions the need disappears; I just want to be consistent with the hardware.

Program composition and recursion

In short, I would guess you're dead wrong on this

No, I'm not.

I strongly believe that programs should be built by composition from elementary/already defined parts, and should be read/explained by reversing the process (i.e. by decomposition). Recursion kills both processes: one inevitably comes across things that are not defined yet (or are being defined). The only solution is to use the operational semantics, which is a big no-no to me. Operational semantics is the second worst evil in computing after the side effect-based imperative semantics, IMHO.

Folds

Most finite loops are just maps or folds over a finite list, and those obviously terminate. Complex loops and more complex recursions are equally hard to solve the termination problem for.

Moreover, while it is possible to translate all recursion into loops, to do so you have to introduce your own stack. To pick a classic example:

 treeSum(tree) =
   if tree == nil then 0
   else tree.value + treeSum(tree.left) + treeSum(tree.right)

Doing that requires you to end up managing a lot of the complexity yourself. I don't see any advantage to having programs have to deal with that sort of nonsense.
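
For comparison, here is a sketch (added for illustration, in Haskell, assuming a simple Tree type) of what the loop-style version looks like once you carry the stack of pending subtrees yourself:

data Tree = Nil | Node Int Tree Tree

treeSumIter :: Tree -> Int
treeSumIter t = go 0 [t]             -- the list is the explicit stack
  where
    -- 'go' plays the role of the loop; the stack you would otherwise
    -- get for free from recursion is now managed by hand.
    go acc []                  = acc
    go acc (Nil : rest)        = go acc rest
    go acc (Node v l r : rest) = go (acc + v) (l : r : rest)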

As for recursion and FP, recursion is core to FP. Loops fundamentally involve traversing data structures with mutable state variables. So no, it is not going to be dropped.

Loops vs recursion

Complex loops and more complex recursions are equally complex to solve the terminating problem for.

Almost true. However it's always easier to track/limit the number of iterations of a loop than to track/limit the recursion.

[translating recursion into loops] Requires you to end up managing a lot of the complexity. I don't see any advantage to having programs have to do deal with that sort of nonsense.

Well, if you use recursion very often, then probably something is wrong with your approach or the tools/languages you use. The 'classic' example you've given is of course very simple because it 'abstracts' from such real possibilities as infinite trees (where it will fail badly), and a more complex loop-based solution would be safer and more useful too.

recursion is core to FP

Really? What definition of FP requires that?

Loops fundamentally involve traversing data structures with mutable state variables.

OK then let's not use the word 'loop'. I claim that I can define loop-like computations in a purely declarative manner.

Could you show us a

Could you show us a definition/construction/iteration of the natural numbers that is not recursive?

This is a genuine question. I can't think of one, and google fails me. So, I remain ignorant, but curious.

Induction

Would you be satisfied with such a definition: the natural numbers begin with 1 and every member of the sequence is equal to the previous number plus 1?

and every member of the

and every member of the sequence is equal to the previous number plus 1

This would be expressed in a program recursively, no?

folds cont

Almost true. However it's always easier to track/limit the number of iterations of a loop than to track/limit the recursion.

How? If you want to track and exit, just do something like:

 trec fun counter limitvalue input =
   if counter > 1
      then trec fun (counter - 1) limitvalue (fun input)
      else limitvalue

The 'classic' example you've given is of course very simple because it 'abstracts' from such real possibilities as infinite trees (where it will fail badly),

It is meant for finite structures, a left catamorphism. But the point was it would be hard to implement even that simple structure in loops.

Really? What definition of FP requires that?

I am not a computer scientist but AFAIK the Entscheidungsproblem to which lambda calculus was part of the solution was concerned with recursive definition. So the very first definition: a definition of algorithms sufficiently general to express first order problems.

I claim that I can define loop-like computations in a purely declarative manner.

I agree with that. That's the universal property of fold. You express your recursion as a list operation:

g []     = v
g (x:xs) = f x (g xs)

and then g = foldr f v. I'm still failing to see the advantage in not using recursion.
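
For instance (a small example added for illustration), instantiating the schema with f = (+) and v = 0 gives the usual sum:

sum' :: [Integer] -> Integer
sum' = foldr (+) 0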

Folds

If you want to track and exit, just do something like: [...]

That's not what I meant, not an option to track and limit, but a guarantee that a program terminates in a given number of iterations.

It is meant for finite structures

I understand that, but such a simple generalization would make your program immediately more complex (if only you want to make sure it doesn't eat the entire memory).

the point was it would be hard to implement even that simple structure in loops

Well, sometimes doing it right means more effort. This simple example makes some things implicit and therefore creates an illusion that using recursion is more "natural"; it is not often so.

I'm still failing to see the advantage in not using recursion

Again, it's not the matter of getting some immediate advantage, it's the matter of doing it right. Imagine what software would be like had we been using GOTO all these years. Fortunately, recursion is not as widespread as GOTO used to be.

Huh?

That's not what I meant, not an option to track and limit, but a guarantee that a program terminates in a given number of iterations.

Loops don't have that property anymore than recursion does. Either you loop / recurse over something finite or you have something to force finite termination or you don't have a guarantee. There is no difference.

I understand that, but such a simple generalization would make your program immediately more complex (if only you want to make sure it doesn't eat the entire memory).

How? Map the data structure recursively to a stream (make sure that evaluates lazily) and then for the recursion I did:

cata plus seed []     = seed
cata plus seed (a:as) = plus a (cata plus seed as)

Then cata is foldr. Fold is the most general form of all catamorphisms. Fold is so general that it can handle a huge class of possible recursions, yet it is simple enough to be implemented in hardware. And at the same time its memory properties are known. There are no surprises of the type you are talking about.

Well, sometimes doing it right means more effort....Again, it's not the matter of getting some immediate advantage, it's the matter of doing it right. Imagine what software would be like had we been using GOTO all these years. Fortunately, recursion is not as widespread as GOTO used to be.

You keep assuming that loops are "right" vs. recursion. That's called begging the question. I think it is time for you to provide some examples of what you are talking about: recursions transformed into simple loops. It is not like this is an unstudied problem. There is a well-known class of recursions which can be implemented as left folds, which guarantee tail recursion, which guarantees finite completion and a small memory footprint. If you want to guarantee those things you implement as a left fold; there is no need for loops.

As for the rest, we are still using GOTO after all these years. All the brand new CPUs we are inventing still have instructions that move the instruction pointer register. And I'd say that, excluding instructions that transfer data like MOV, GOTO-style instructions like JMP might very well be the most commonly used instructions.

Now what you really meant probably is that we don't use GOTO in higher-level code, which is still false. To pick an example, since you are focused on looping structures: I see Java and C# code all the time with for loops loaded with "break" and "continue" statements, which are just fancy names for GOTO. Moreover, the issue with GOTO is that it makes analysis of the flow of control hard. You have yet to show anything that recursion makes hard. You just keep asserting that recursion is the wrong way to do things and loops are the right one, as if it were well known that there is any problem with recursion.

I don't know whether you are going to reply with an actual example or not. I'd assume not, because frankly there are rather strong mathematical proofs that such things can't exist. But I am going to assume you are actually interested in this topic and recommend a basic text that covers recursion, the connections between map and fold, and the fusion results: Bird's Introduction to Functional Programming (http://www.amazon.com/Introduction-Functional-Programming-Haskell-Edition/dp/0134843460). If you aren't willing to buy a book then

1) google fold and get to the point that you understand foldl and foldr. Note the idea of the universal property of fold.
2) Next grok the general idea of a catamorphism.
3) then zip and anamorphisms.
4) then google fold fusion. After that hylomorphisms, which are the generalization (computer science, not Greek philosophy). Also "map reduce".

Then you have the class of recursions that have the properties you want.

Summary

Thank you for the informative post. I'm not against folds and catamorphisms if they are already defined in the programming language, I'm against using the general, unrestricted recursion. In fact, my own programming language, Aha!, has both 'foldr' and 'foldl' as built-in constructs.

Either you loop / recurse over something finite or you have something to force finite termination or you don't have a guarantee.

Aha!'s most general "loop" (the 'first' construct) has a clause that forces you to specify the maximum number of iterations before the loop fails. There are a number of other, more specialized, iterative constructs as well (all of them purely declarative, by the way).

I see JAVA and C# all the time with for loops loaded with "BREAK" and "CONTINUE" algorithms which are just fancy names for GOTO

True, but still the usage of GOTO is very limited - it's not like your program does one thing, then jumps to another, then under certain conditions jumps to the start, etc. We have to admit that the quality of code has improved over the years (since structured programming was introduced), although not to the desired level, of course. Otherwise, it would have ended up in a big disaster long before the Y2K.

I don't know whether you are going to reply with an actual example or not

Sorry, I don't understand what example you're expecting from me.

You have yet to show anything that recursion makes hard.

I thought I did. Let's summarize my main points against general recursion:

  • it interferes with compositional semantics and forces one to use operational reasoning
  • it makes it difficult/impossible (at the language level) to limit the number of iterations and therefore to guarantee termination

OK, let's say recursion just didn't fit into my approach to writing programs. Whether I'm 'dead wrong' or 'live right', time will tell.

a counter

It is enormously hard to stop a loop from being reset. Just have loop x call and reset y which calls and resets x. You need something like a state monad to avoid this whether you use loops or recursion. What you would seem to really need is a super level that sits above your program which is checking loop counts.

So, for example, write the program without any ability for functions to call each other (including loops); just lift all functions to a monad. To build a simple example, if you want your limit to be 100:

return a = Left (a, 100)
bind f (Left (x, n)) = if n == 0
   then Right "loop limit exceeded"
   else Left (y, m)
       where
          Left (y, a) = f x
          m = ((max 0 .) . min) (a - 1) (n - 1)

If you want this to be a global limit have main evaluate directly against this monad.

Then you don't worry about structures at all. Every piece of data and every function carries a counter with it, and you have to explicitly raise the counter where you want to reset it. By forcing a reset on the data, everything that touches that piece of data ends up getting counted down. You avoid complex structures of interlocking calls resetting the loops. Then you just don't allow infinite data structures to exist or be lifted.

Even better IMHO (though there are others here who are far better than I at logic programming) would be something like Prolog, where the system tries multiple execution strategies when it has choices, so infinite loop paths just get ignored in favor of finite paths, and if all paths are infinite the system just gives up after a certain depth.

___

The approach above works if you allow for meta programming. If you don't and just have a fixed number of functions at compile time then you can just have the stack keep track of the calling functions and increment counters.

I thought I did. Let's summarize my main points against general recursion:

You have yet to show me an example of a recursion that's not expressible as a loop. Your argument is that there are problems with recursion that don't exist with loops and my argument is that this is impossible since all recursions are just loops with different syntax.

Aside

For a variety of reasons, your example does not give rise to a monad. Correcting and completing that code produces something which violates the monad laws. In fact, I believe it violates all the monad laws. What you are attempting to do can't be done with a monad in an implicit manner. Some intuition for this is, if your monad is an abstract data type and only provides return and bind as operations to create a value of the monadic type, then all uses of your monad are equivalent to return x for some x. Another intuitive perspective on this is that bind does not give rise to an (observable) notion of a "step". The unit laws forbid this.

You can use monads to do what you want, but you will have to explicitly count the steps, e.g. you could use a state-exception monad and have a tick action.
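
For what it's worth, here is a minimal sketch (added for illustration; the names Counted, tick and countdown are made up) of such a state-exception monad with an explicit tick action:

-- Fuel is threaded through bind, but only 'tick' consumes it, so the
-- monad laws are not violated by the counting.
newtype Counted a = Counted { runCounted :: Int -> Either String (a, Int) }

instance Functor Counted where
  fmap f (Counted g) = Counted $ \fuel ->
    fmap (\(a, fuel') -> (f a, fuel')) (g fuel)

instance Applicative Counted where
  pure a = Counted $ \fuel -> Right (a, fuel)
  Counted mf <*> Counted ma = Counted $ \fuel -> do
    (f, fuel')  <- mf fuel
    (a, fuel'') <- ma fuel'
    Right (f a, fuel'')

instance Monad Counted where
  Counted m >>= k = Counted $ \fuel -> do
    (a, fuel') <- m fuel
    runCounted (k a) fuel'

-- The explicit step: consume one unit of fuel or abort the whole computation.
tick :: Counted ()
tick = Counted $ \fuel ->
  if fuel <= 0 then Left "loop limit exceeded" else Right ((), fuel - 1)

-- Example: a recursion that ticks on every call, so it cannot exceed the budget.
countdown :: Int -> Counted Int
countdown 0 = pure 0
countdown n = tick >> countdown (n - 1)

-- runCounted (countdown 5) 100  ==  Right (0, 95)
-- runCounted (countdown 5) 3    ==  Left "loop limit exceeded"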

monad laws

I agree, good point. I was just getting frustrated with the question of whether there is any distinction between loops and recursion, and just wanted to show some code demonstrating how you tag data. That absolutely isn't a monad. The monad laws are stateless and don't depend on the order of computation. The whole idea here is that it is vital to embed information about the computation strategy into the computations. A state monad makes even more sense; I should have done that and had an external counter.

That being said, I'm still not sure how to force a tick in a lazy language and not violate some laws.

A compromise

After a lot of thinking, I reluctantly admit that I will have to introduce some limited form of recursion to my language, Aha!. This is because recursion is deemed by many as a necessary programming technique, and without it the language is unlikely to be widely accepted.

So, my plan is to introduce a new construct, specifically for recursive definition of a function, 'letrec', in the form:

letrec name: type = closure
to expression
{where statement ...}
end

Here 'name' becomes the name of the recursive function which can be used throughout the 'letrec' statement (i.e. within the closure and each statement after 'where', if any). The 'to' clause limits the total number of times the function can be recursively called, similarly to the 'to' clause in the 'first' construct ("loop") that limits iterations.
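
In Haskell notation, a rough sketch of my own reading of the 'to' clause (not part of the proposal itself) would be a fixed-point combinator that permits at most n self-calls and fails once the budget is exhausted:

fixTo :: Int -> ((a -> Maybe b) -> (a -> Maybe b)) -> a -> Maybe b
fixTo n f
  | n <= 0    = \_ -> Nothing          -- call budget exhausted: the construct fails
  | otherwise = f (fixTo (n - 1) f)    -- one more self-call allowed below

-- Example: a factorial allowed to call itself at most 20 times.
factB :: Integer -> Maybe Integer
factB = fixTo 20 (\self k -> if k <= 0 then Just 1 else fmap (k *) (self (k - 1)))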

There's still no other way in Aha! to invoke recursion.

That way, we can have recursion without the semantic difficulties I mentioned before and without the risk of infinite computations (Aha! requires that every computation must terminate).

What do you think?

You're treading dangerous

You're treading in dangerous waters. You don't have a construction for mutual recursion, another necessary technique, and calculating fuel for termination can increase constant factors, which many programmers would resent. If you have folds built in, perhaps you would also build in structural recursion? Ensuring termination with structural recursion means taking extra precautions with how datatypes can be defined (e.g. Coq's strict positivity restriction). Ensuring termination and still having a useful language is tricky at best.

Tricky - yes, but not so dangerous

You don't have a construction for mutual recursion, another necessary technique

Yes, I do. The 'letrec' constructs can be nested, which allows you to define another recursive function, for example, inside a 'where' clause; both functions will be available inside the inner definition.

calculating fuel for termination can increase constant factors many programmers would resent

Well, my goal isn't pleasing everyone. I believe one shouldn't be using recursion without having estimated how many times in total the function is going to call itself. What do you mean by 'constant factors'?

perhaps you would also build in structural recursion?

No, at this stage I won't.

Ensuring termination and still having a useful language is tricky at best.

Yes, and I'm striving to do this trick.

Explicit loops considered problematic

I believe we'd do well to eliminate all explicit loops from programming languages. We can still have an implicit top-level loop for the progression of a reactive program or temporal logic, or an implicit loop built into a language-primitive 'fold' function for data structures (e.g. an automatic instance of Data.Traversable for every type).

My motivations are composition and extension. Explicit loops are, by nature, closed loops. We can "open" a closed loop by injecting some callback mechanisms, but doing so is awkward, difficult to compose or abstract. If we keep all loops open - open ALL the way to the edges of the program (thus eliminating need for syntax to open or close loops) - then our applications will be correspondingly more extensible - it is easy to declaratively add new behaviors to the loop, to compose two independent periodic or continuous behaviors into one larger periodic or continuous behavior. When folds and 'recursive' data structures are held open, they similarly become more extensible and reusable, easily addressing some common expression problems.

Our languages should also make strong, logical distinctions between looping-over-time and looping-over-space. Failures to make proper space-time distinctions easily lead (depending on mode of failure) to race-conditions, priority inversions, inconsistencies, implicit local state (which is difficult to observe, persist, upgrade, or debug), or space-time leaks.

Explicit loops

I believe we'd do well to eliminate all explicit loops from programming languages

That's a bold claim. I hope you don't mean any of the existing languages and are referring only to newly designed ones?

I have no problems with loops unless they are potentially infinite. To me, a PL construct is semantically sound if one can deduce its end goal from the goals of its compositional parts. What is the end goal of a non-terminating construct?

The Aha! programming language doesn't have general loops, but it has a construct that resembles the while-loop. There are several differences:

  • it is not an imperative construct, but instead an expression that performs a search for the first item in an (inductively defined) sequence that satisfies the specified condition
  • it has a clause that limits the number of sequence items to look at; if the condition isn't satisfied within the specified number of the sequence's items, the whole construct fails

There are other constructs that can be implemented using loops, including simple 'folds'.

Our languages should also make strong, logical distinctions between looping-over-time and looping-over-space

Sorry, I don't understand what you mean. Especially in the light of what you just said about loops.

Re: Explicit loops

I mean for newly designed languages, of course. Perhaps I should have said: we'd do well to eliminate all explicit loops from programming.

I have no problems with [... feature X because ...] a PL construct is semantically sound if

There is more to PL design than being semantically sound. I hope, for the sake of your prospective users, that soundness isn't all you consider.

My argument above is not based on soundness, but rather on extension, composition, expression - valuable features for a language and the subprograms and applications developed in it.

To me, a PL construct is semantically sound if one can deduct its end goal from the goals of its compositional parts. What is the end goal of a non-terminating construct?

Not every computational construct has a clear end-goal, and many of them are non-terminating. Consider, for example: simulations, video-games, operating systems, surveillance and security systems, mechanical control systems. In those cases, you would typically frame soundness in terms of progress - e.g. does every step terminate, such that the next step can start?

I expect you'd associate this with an implicit loop external to your Aha! language that accounts for IO. External and implicit is, as I mentioned above, a fine place for loops.

Sorry, I don't understand what you mean. Especially in the light of what you just said about loops.

A loop over 'space' is one that models, generates, or processes a spatial distribution of values - e.g. if we compute a value for every pixel on a screen, or process a set of facts that are true in a given frame. A loop over 'time' is one that models, generates, or processes a temporal distribution of values - e.g. a stream of user-input events. Many programming models (such as the actors model, or procedural languages) do not support robust distinctions between these fundamentally different kinds of loops.

Even when the loops are implicit, the distinction is valuable.

Soundness

There is more to PL design than being semantically sound

Of course, but it's the first thing a language designer should be concerned with. Creating a language without a sound base means creating a potential for innumerable problems. The rest depends mostly not on the language itself, but on the corresponding infrastructure: APIs, libraries etc.

Not every computational construct has a clear end-goal, and many of them are non-terminating. Consider, for example: simulations, video-games, operating systems [...]

It's a common mistake to not distinguish computations and applications. Computations must terminate, applications don't have to. I consider an application a (programmer-defined) object that describes programmatic reaction to various external events.

I expect you'd associate this with an implicit loop external to your Aha! language that accounts for IO.

Exactly. I think associating this with an infinite loop is sort of accidental. I mean, this is just how the existing hardware/OSs force us to think of it. In fact, there is no 'loop' in the sense of iterative computation of something, it's just repetitive execution of some routine that handles the events.

Thank you for the explanation of loops over space and time.

Computations and Calculations

I believe you confuse computation with calculation. 'Computation' is commonly used with a much broader meaning than you're giving it (including both communication and calculation, protocols, signal processing, etc.), whereas 'calculation' has the much narrower meaning of producing a value from inputs. (The word 'application' tends to refer either to user-facing software or to the application of a function to an argument.)

Most languages and programming models used today are unsound. Soundness is valuable to reason about all possible programs, but in practice we tend to write and reason about specific programs. Robust composition to limit damage from inconsistent or antagonistic components (e.g. object capability model security) is more broadly useful than soundness. Of course, it's better still to have both soundness (locally) and robust composition (broadly).

The rest depends mostly not on the language itself, but on the corresponding infrastructure: APIs, libraries etc.

Good libraries and APIs can enhance a language greatly, but have difficulty addressing cross-cutting concerns. The pervasive is the province of the language designer. It is a mistake to assume other cross-cutting qualities will follow from soundness.

associating this with an infinite loop is sort of accidental. [...] there is no 'loop' in the sense of iterative computation of something

Consider natural language processing of a large file, or rendering a POV-Ray scene. These operations may result in a deterministic value, but require an unpredictable number of steps to complete. So, you might break them into small steps (each of which you can guarantee to complete) distributed over time - leveraging incremental processing of state to compute the result.

At that point, you'll have a "'loop' in the sense of iterative computation of something".

Re: computations and calculations

I believe you confuse computation with calculation

I think the difference is that calculation is based on a (simple) formula, and computation involves also some decision-making (logic). This is my own definition though; according to the dictionary, 'calculation' and 'computation' are synonyms.

Consider natural language processing of a large file, or rendering a POV-Ray scene. [...]
At that point, you'll have a "'loop' in the sense of iterative computation of something".

Yes, such an application is stateful and it may look like the only way to implement it is using a loop in an imperative language. However, you may look at it differently. It can be expressed as an object whose state changes as result of various I/O events and computations, without the need to specify the control flow. This view is more useful, in my opinion, for a variety of reasons. I recommend that you have a look at the more advanced examples on the Aha! web site, such as SortFile.

My view is that a programming construct (loop or whatever) cannot directly interact with the external world in any way, because this would render it "impure" - there must be a distinct boundary between the pure computation space and the external world where all the side effects occur. Without this distinction, it's virtually impossible to reason about programs, because a program with side effects is effectively a self-modifying program.

virtually impossible to

virtually impossible to reason about [impure] programs, because a program with side effects is effectively a self-modifying program

I disagree for at least two reasons. First, it is quite possible to reason about self-modifying programs - assuming, of course, that the modifications are constrained to support such reasoning, as by type safety or object capability model. Second, impurity is insufficient to model a self-modifying program. You additionally need either local state or an authority to modify one's own source code. Rather than guilt-by-association, I find it useful to be more precise when placing blame. (cf. local state is poison).

can be expressed as an object whose state changes as result of various I/O events and computations, without the need to specify the control flow

I do advocate avoiding explicit loops, and avoiding control-flow semantics in general. Nonetheless, to whatever extent the correct behavior of your application depends on future inputs being influenced by past outputs (whether this be state or query responses), you essentially have an effectful, iterative computation.

Self-modifying

Sorry, I wasn't clear about what I meant by 'self-modifying'. I didn't mean that a program with side effects modifies its source code. I meant that one part of such a program can implicitly affect the behaviour of other parts of it, which makes reasoning about it a lot more difficult; you can't reason about the parts in isolation.

I do advocate avoiding explicit loops, and avoiding control-flow semantics in general. Nonetheless, to whatever extent the correct behavior of your application depends on future inputs being influenced by past outputs (whether this be state or query responses), you essentially have an effectful, iterative computation.

First, an 'explicit loop' doesn't need to be described in terms of the control-flow semantics (see 'first' construct in Aha!). Second, I distinguish between an iterative computation and an interactive application. An application that (indefinite number of times) reads user's input and processes it is NOT an iterative computation, even if some programming languages (or habits of using them) make you think so. An iterative computation cannot depend on data entered or externally modified during that computation.

Effectively self-modifying

I understood your meaning. You used the phrase 'effectively' self-modifying, which I took liberally to describe any model of self-modifying systems. Consequently, both my counter points apply. (You'd only need to modify source code if you don't have any model of local state.)

I agree that it can be "more difficult" to reason about effectful systems. Though, once you start modeling impure systems using pure languages (e.g. with monads), many of the same issues arise (cf. composition vs. purity). But to say it is "virtually impossible" is an exaggeration.

Similarly, "you can't reason about the parts in isolation" is simply incorrect. That's what modularity is for. If you take modularity to its natural extreme in an effectful system, you get the object capability model, which is very well suited for reasoning about parts of the system and how they may interact.

First, an 'explicit loop' doesn't need to be described in terms of the control-flow semantics

I did not mean to suggest that explicit loops need control flow, and I've certainly worked with loops in dataflow languages and pure languages. But explicit loops are closed loops in any paradigm, due to the essential natures of explicitness and loopiness.

Second, I distinguish between an iterative computation and an interactive application.

It is a useful distinction, to know and enforce that input is frozen during certain well-defined steps in a process. The only problem I have with it is your choice of words.

Clearly, I have a very different definition for 'computation' and 'application' than you. (I believe, but have not bothered to verify, that the same holds of most people.) I'll forget your meaning after any lull in the conversation, and probably argue about it again when you try to push your definition. It's okay to have working definitions, but a working-hard-to-make-it-stick definition might deserve reconsideration. (I've had experiences from the other side of this argument, pushing my own definitions. It's not a winning position for anyone. Everyone loses.)

Pure OCM?

once you start modeling impure systems using pure languages (e.g. with monads), many of the same issues arise

I hate monads. I think they were invented to obtain all the 'benefits' of imperative programming while remaining 'pure'; the result is that the imperative programming style lives within a purely functional language, defeating the whole purpose of functional programming.

Is it possible to apply OCM or some analogue of it to functional programming?

OCM and functional programming

I assume by OCM you mean 'object capability model'. I have read of analogs of OCM in pure functional programming. I've not studied them deeply, but my impression was that many treat type-information as a kind of capability and leverage phantom types and other tricks.

One can also explicitly model ocaps in a pure system, though it's difficult to achieve without dependent or dynamic types. In those cases, one models pure routers or controllers to manage flow of information between pure subgraphs.

API

Good libraries and APIs can enhance a language greatly, but have difficulty addressing cross-cutting concerns. The pervasive is the province of the language designer.

I think we should have a small, well-defined and stable core of the language and an extensible standard library/API that addresses those concerns. With Aha!, I don't trust anyone else to design the libraries and API for me; they are my responsibility. I just don't consider them part of the language.

No

No, induction is the reverse of recursion.

In my pet language, Aha!, you could write:

naturals = seq 1 then (curr + 1) end

to define the sequence of all natural numbers.

So the mistake (this whole

So the mistake (in the discussion following your original post) then is assuming that if we're not recursing, we're using loops? What you are suggesting is that languages allow induction to be expressed directly?

Anyway, what is 'curr'? A reference to the set of naturals constructed up to some point?

Why would you want to express that?

Is there a need to use other examples to form a discussion around?

'seq'

So the mistake (this whole thread is based on) then is assuming that if we're not recursing, we're using loops?

Possibly yes. My seq construct is one way to define iterative computations in a declarative manner. Note that seq doesn't produce computations yet, only defines what they will be like when requested (i.e. defines them 'lazily').

what is 'curr'

'curr' means the previous item of the sequence. Its type is the same as the type of the first item (present after the word 'seq').

Seems rather complex.

That seems a rather complex way to prove termination for a recursive function. The usual approach is to have some metric of recursion depth which must decrease on each iteration and never go negative. Boyer and Moore proved (in A Computational Logic, 1978) that the metric can be a tuple of natural numbers (as with paragraph numbers, "1.2.1.4"), which is convenient when you have nested loops or recursive functions.

This is a relatively well settled area of program verification.
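
A standard illustration of such a tuple metric (an example added here, not from the comment) is the Ackermann function: the pair (m, n) decreases lexicographically on every recursive call, even though neither component decreases on its own in every case.

-- assuming m, n >= 0
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1                 -- (m, 0) -> (m - 1, 1): first component drops
ack m n = ack (m - 1) (ack m (n - 1))   -- (m, n) -> (m, n - 1) and (m - 1, ...): both smaller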

Your citation works only for

Your citation works only for first order programs. Higher order programs have a more complicated story for termination that has not been well-settled. Coq has its strict positivity restriction to justify structural recursion's termination, but this makes modularity almost unattainable. There will be no perfect solution thanks to Gödel, but we certainly can make the state-of-the-art better and more ergonomic. This speaks nothing to the method proposed by OP but your comment had too much of a dismissive tone to not reply.

What is complex?

I don't think imposing a limit on the number of recursive calls is a difficult concept to understand, and this is what's important to me. I was not talking about program verification, I was talking about estimating the 'computational complexity' of a recursive algorithm, in order to prevent potentially infinite computations. Automated verification can certainly be used to prove that the estimate is correct and therefore the algorithm isn't going to fail because the limit is exceeded, but this is beyond the scope of the programming language I believe.

Recursion and induction

Great paper. However, I've read several papers pertaining to (more or less) the same FP abstractions. Furthermore, the language used in the main parts of the paper was decent, though too verbose for my liking.
Provided a programmer is willing to give up imperative programming and stick to a functional or declarative style, this knowledge would be apt.

Having said that, I think it's funny to look at any functional programming feature pragmatically, given that most programmers hardly use them. Just like thousands of other programmers, I invested in learning induction and recursion only to give it up. Additionally, I believe any prototypal/classical language using FP abstractions as a subset of the language's intent is best. Statistics would prove this.