GRS vs LTRS

Why are FPLs (mostly) based on GRSs instead of LTRSs? Recursive let does not seem all that useful, but cyclic links in data make automatic reasoning very hard and add another evil source of nontermination.

Perhaps if LTRSs had been chosen, we would have plenty of useful total functional languages by now.

Clarification

Here, "programming language L based on theory T" means "for any theory T', semantics for programming language L built extending theory T' is much worse than semantics built extending theory T".

The functional sector of popular programming languages (PPLs) is not based on the lambda calculus, due to the existence of things like sharing, lazy evaluation, data constructors, and pattern matching.

TLA

What are GRS, LTRS and FPL?

Dunno, but I can guess

In an LTU context, FPL is almost certainly Functional Programming Language. Likewise the RS in GRS and LTRS is probably Rewriting System, so I'd guess that GRS is Graph Rewriting System. But my Google-augmented guessing gland is not up to working out what LTRS is. L might stand for Lindenmayer or Language or Logic, but I can't make any headway beyond that. It doesn't help that LTR could be Left To Right.

Or maybe it's Grams and Liters.

One to go

If RS is Rewriting System, then the T is probably Term. We're down to one letter now!

Ooh, maybe I have it.

Linear. If we're talking about termination, can linearity be far behind?

And, sure enough, Google finds a handful of links that look promising.

Hmph

And, Google found no links at all for "labeled term rewrite system", which turns out to have been the true acrospansion.

More results when searching

More results when searching this way.

Funny...

I had the exact same thought process. Glad to know I wasn't the only one :-)

Nontermination isn't evil

I always find it amusing to see the word "evil" in an engineering/scientific context. Design decisions aren't evil, of course, and generally neither are the scientists and engineers who make them. The problem with the Oppenheimers and von Brauns of the world is what they built, not the methodologies and tools they used in the design process.

As I share the confusion concerning the terms "GRS" and "LTRS", I'll pass on that specific question. Nontermination, in general, is of course necessary for general-purpose computation--that said, being able to prove that an algorithm terminates for all inputs is a useful thing. Since all general-purpose languages are Turing-complete, some sort of nontermination must be present--and it's actually pretty tricky to write a PL which is generally useful but not Turing complete.

For something "not evil",

For something "not evil", non-termination is quite insidious and silently harmful.

If we need a non-terminating program, simply hook a terminating program to a continuous or periodic initial input source like a clock or camera. Voila! Done.
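
A minimal Haskell sketch of that idea (the names are mine, not from any particular system): the only recursion lives in the driver, whose lifetime is bounded by the external input stream, while the step function itself is trivially total.

    import System.IO (isEOF)

    -- Total "program logic": no loops, no recursion; obviously terminates.
    step :: Int -> String -> (Int, String)
    step n line = (n + 1, show n ++ ": " ++ line)

    -- The only loop, bounded by the external event source (here stdin,
    -- standing in for a clock or camera).
    driver :: Int -> IO ()
    driver n = do
      done <- isEOF
      if done
        then pure ()
        else do
          line <- getLine
          let (n', out) = step n line
          putStrLn out
          driver n'

    main :: IO ()
    main = driver 0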

I've yet to see a good argument for non-termination other than "it's tricky to make a PL that is useful but [disallows] non-termination".
Reminds me of another saying: "Sometimes we must decide between what is right and what is easy." I believe that we should guarantee termination for as much of the language as we are able, pushing it into the highest layer possible... ideally all the way back to hardware event sources.

Tricky indeed!

it's tricky to make a PL that is useful but guarantees non-termination

Particularly if you disallow side-effects. :)

Edit: More seriously, merely disallowing non-termination probably doesn't enable or even simplify very many static analyses. For example, if your language consists of general constructs but also requires that you provide a separate proof of termination in some powerful logic, what analyses does that enable beyond the obvious termination analysis? If you have a particular analysis in mind, you probably need to restrict to a model of computation that admits that analysis - termination is just one property of many that you might want to establish.

Oops...

On the wording, there. Guaranteed divergence without side-effects is particularly useless.

And static analysis is the only type available to programmers. Guarantees of termination, safety, certain security properties, certain optimizations (like the tail-call optimization), performance properties, etc. are the tools with which engineers work. They help with the frame problem - allowing the engineers to know that certain useful properties will be unaffected by a change or composition even in the presence of black-box compositions, variable runtime inputs, and incomplete or changing requirements. Without language support, pretty much everything falls victim to Rice's Theorem.

The termination property is useful even by itself, since accidental non-termination is a rather common bug arising from simple logic errors. Making non-termination and divergence explicit, or difficult to achieve by accident, is a safety consideration. Admittedly, though, even the Ackermann function terminates... but might take a billion years to do so for relatively small inputs.
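
That Ackermann point deserves the classic illustration. The definition below is provably terminating (each recursive call decreases the pair (m, n) lexicographically), yet ackermann 4 2 already has 19,729 decimal digits, and ackermann 4 3 is hopeless on any physical machine.

    -- Terminates for all inputs (lexicographic descent on (m, n)),
    -- but is astronomically slow for m >= 4.
    ackermann :: Integer -> Integer -> Integer
    ackermann 0 n = n + 1
    ackermann m 0 = ackermann (m - 1) 1
    ackermann m n = ackermann (m - 1) (ackermann m (n - 1))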

Such analyses become more powerful when achieved simultaneously with various other analyses... including for side-effects, determinism, restrictions from certain errors, realtime properties, etc. I.e. I totally agree that termination is just one property of many that you might want to establish.

Various times for analysis

static analysis is the only type available to programmers.

This is false. Automatic program analysis is also extremely useful at these stages:

  • ordering
  • deployment
  • reconfiguration
  • maintenance
  • initialization on startup (last chance)

termination is just one property of many that you might want to establish.

I emphasize that the programmer is actually interested not in a termination proof but in proofs of bounded execution time, bounded memory allocation, and the impossibility of abnormal termination (e.g. due to arithmetic overflow or division by zero).
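
The last of those can often be had by construction rather than by after-the-fact proof. A tiny Haskell sketch of the idea:

    -- Unbounded Integer rules out arithmetic overflow; returning Maybe
    -- rules out the division-by-zero crash by forcing callers to handle it.
    safeDiv :: Integer -> Integer -> Maybe Integer
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)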

still static...

Programmers are still stuck performing analysis based on information available to them when developing and/or composing a region of code. This is necessarily static. (It isn't necessarily automatic or deductive or complete, though.)

not static...

Programmers can include in their product an analyzer that executes at product configuration time. So from the user's point of view, termination and absence of exceptions are perfectly guaranteed. All that programmers need to know at development time is that there exist some working configuration settings that pass the built-in analyzer, but no more. Obviously such analysis is necessarily automatic. This kind of analysis is definitely not static (though not completely dynamic, either).

Gatekeeping to prevent bad

Gatekeeping to prevent bad configurations or bad input of other sorts from reaching the main body of code is useful, I certainly agree. It's a pattern I use often. It allows the programmer to enforce his assumptions from 'domain knowledge' that only certain classes of inputs will be used.
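
A minimal Haskell sketch of that gatekeeper pattern (the configuration fields are made up for illustration): all checking happens once at the boundary, and the main body may then assume the invariants.

    data Config = Config { workers :: Int, timeoutMs :: Int }

    -- The gatekeeper: all input validation happens here, once.
    validate :: Config -> Either String Config
    validate c
      | workers c <= 0  = Left "workers must be positive"
      | timeoutMs c < 0 = Left "timeout must be non-negative"
      | otherwise       = Right c

    -- The main body may assume the validated invariants.
    run :: Config -> IO ()
    run c = putStrLn ("running with " ++ show (workers c) ++ " workers")

    main :: IO ()
    main = either error run (validate (Config 4 500))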

But the programmer must still perform static analysis based on what the gatekeeper is intended to allow through. From the user point of view, termination and absence of exceptions aren't guaranteed unless the programmer can guarantee them based upon static analysis using that domain knowledge of inputs.

And all that still doesn't help much with black-box configuration.

"Right" vs "Easy" is an engineering decision...

...and thus dependent on the application.

Not all software is mission-critical. If some browser widget coded in Javascript goes into an infinite loop, as many of them are prone to do, it's not a big deal.

That said, an interesting practical example of a programming system which tries to achieve a balance between being useful and general purpose, but terminating, is the template mechanism of MediaWiki. If users could write wiki pages which did not terminate, it would be a vector for DOS attacks against Wikipedia. MediaWiki/Wikipedia prevents such attacks by not having arbitrary looping constructs, by preventing recursive page includes, and by imposing server resource limits on the complexity of a page render. Also, there's no equivalent to "lambda"--new pages and templates must be explicitly created by a separate action (as opposed to created on the fly as the result of a page being rendered), and permission to do so is restricted to registered users of the system.
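
A toy model of that kind of restriction (nothing like MediaWiki's actual implementation, just the shape of the idea): expansion carries an explicit depth budget, so rendering a page always terminates no matter what templates users write.

    data Page = Text String | Include String | Seq [Page]

    render :: Int -> (String -> Maybe Page) -> Page -> String
    render _ _    (Text s)    = s
    render d look (Seq ps)    = concatMap (render d look) ps
    render 0 _    (Include n) = "[include depth limit reached: " ++ n ++ "]"
    render d look (Include n) =
      case look n of
        Nothing -> "[missing template: " ++ n ++ "]"
        Just p  -> render (d - 1) look p

Structural recursion handles Text and Seq, and the budget d strictly decreases on every Include, so even mutually recursive templates merely bottom out at the limit.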

Theoretically inelegant, perhaps, but it works. Here, the need to prevent hostile attacks on the server (and Wikipedia has lots of enemies, believe me) trumps programmer convenience and universality. (Trusted programmers, of course, have access to the underlying PHP code, which has no such restrictions).

... until you realize it's a smart

... until you realize it's a smart phone, run out of power because all functionality is frozen, and can't call 911 ;-)

I would submit that this is the responsibility

of the phone designer, not the applet designer.

Mission critical devices should not run untrusted software. Now whether a cellular phone is mission-critical, given its potential for summoning emergency services should the need arise, is an interesting question--probably out of scope for LtU.

There isn't a responsibility on every programmer to bullet-proof every line of code, lest the code be imported in a mission-critical app. Ensuring the correctness of mission-critical designs is the responsibility of the designers.

In the case of cell phones, I would think that ensuring that buggy (or malicious) code is incapable of rendering the phone inoperable would be a priority. Cell phone platforms do take steps to ensure that buggy or malicious code cannot surreptitiously dial the phone, racking up a customer's bill.

At any rate, phone designers appear to take the position that keeping the battery charged is the user's problem. I have yet to see a phone which degenerates to a "911 only" state when the battery runs low, keeping some reserve power available for emergency purposes (that said, I'm not at all familiar with the gamut of phones out there).

All successful software becomes mission-critical.

Not all software is mission-critical. If some browser widget coded in Javascript goes into an infinite loop, as many of them are prone to do, it's not a big deal.

You say this now, but you never know where that browser is being used, and it is natural that successful technologies will eventually be applied outside their original specifications. At that point, it's a bit too late to change Javascript. An infinite loop in a browser widget that was written to provide remote control of a robot could cause a failed surgical operation (knife didn't stop moving!) or crash a UAV (camera control froze up!) or, yes, kill a smart phone in a medical emergency... because the browser isn't performing properly.

One can hack around many non-termination issues by pulling out the bazooka known as 'process accounting', but killing processes because they're resource-hungry becomes difficult to reconcile with guarantees about consistency, safety, correctness, etc.. It also doesn't much help with secondary effects... e.g. if one knows that a region in a language has no side-effects and always terminates, it becomes highly subject to static partial-evaluation... allowing us to systematically 'flatten' abstractions (trading space for speed). As projects compose more deeply, these guarantees and the partial-evaluation become ever more valuable.

Projects are built by composing other projects. Unless we wish to limit composition, we should be designing our tools to make as many guarantees about such compositions as possible.

You mention Wikipedia/MediaWiki as an example where developers wish to allow code but wish to make certain guarantees about it. It doesn't take much to imagine Smart Wikis that allow users to develop interactive tutorials, perform complex queries, introduce and compose web-applications, allow the development of multi-user interactive fictions, provide for live composition of music, and so on. For each of these, preventing non-termination is very useful to avoid accidental expense.

Of course, termination doesn't prevent exponential or higher expense, doesn't help with disruption-tolerance or zoomability or accessibility or describing page composition, etc. But every little bit helps.

Choosing the right tool for the job

You say this now, but you never know where that browser is being used, and it is natural that successful technologies will eventually be applied outside their original specifications.

While undoubtedly true, this seems to be as much a problem of poor engineering practices as it is of language design. The structural engineer who didn't check material specifications against domain requirements, and attempted to build a highway overpass out of Lego blocks simply because they were successful in other domains, would rapidly find themselves out of a job (and possibly in court). Similarly, I'd expect anyone who built a safety-critical surgical application around a poorly understood browser with non-existent safety guarantees to quickly find themselves in hot water.

Having few limits on composition would be nice. But in practice we can't and don't engineer every bolt or resistor to the levels required for building a passenger aircraft. Nor should every piece of software have to be designed on the assumption that it may one day be used in life-critical medical operations. What would be useful in a language design are better ways of describing and understanding the guarantees provided by a particular piece of software, so that developers can make more informed decisions about the selection of projects they're building upon.

Engineers using our

Engineers using our languages are only responsible for achieving a level of confidence in their design that is state-of-the-art within their domain. Language designers are the arbiters of "state-of-the-art". Language designers determine which properties are achieved by construction, which are achieved by bondage and discipline, which are achievable by self-discipline, and which properties cannot practicably be achieved without writing a whole new language or framework. I'm extremely hesitant to blame poor analysis on the language users when the language developers aren't willing to take responsibility for those issues, either!

Having few limits on composition would be nice. But in practice we can't and don't engineer every bolt or resistor to the levels required for building a passenger aircraft. Nor should every piece of software have to be designed on the assumption that it may one day be used in life-critical medical operations.

Language primitives are not materials technology! Duplication of language primitives is nearly free, only costing a few CPU cycles, a bit of RAM, and some mental effort.

While I wouldn't expect users to always build their applications for safety-critical operations, I see very little reason the components with which we build our applications shouldn't be suitable for the same. Language primitives, standard libraries, GUI frameworks, data distribution services and publish/subscribe models, etc. can and should be advanced such that they are suitable in as many applications as possible, including safety-critical applications.

With languages, we really could give every software developer the 'bolts and resistors' suitable for building passenger aircraft. Making this happen is a job for language designers.

What would be useful in a language design are better ways of describing and understanding the guarantees provided by a particular piece of software, so that developers can make more informed decisions about the selection of projects they're building upon.

While that would be useful (the ability to identify invariants, preconditions, postconditions, and other guarantees for operations is very useful), I do not believe it scales very well. Consider how well those 'allocators' in C++ scale when it comes to composition of projects. This approach, taken by itself, is really an invitation saying: "I really can't be bothered with you. Build your own language, including your own set of libraries, if you need a certain property.". The ability to identify features of software components is useful for building another language, but I do not believe application developers should be forced to reinvent language features that are commonly desirable or useful across ranges of applications.

Making certain guarantees common - such as safety, determinism, guaranteed termination for certain regions of code even with composition of other code, support for transactions, various optimizations (recursive tail call), resumable exceptions, object capability model secure distribution of capabilities, transparent or translucent persistence and distribution, databases, etc. - that's scalable. It allows more reuse of the common libraries, and thus far more integration and composition between applications. It's just very tricky for the language designers.

Clarifying the limits of applicability

I'm extremely hesitant to blame poor analysis on the language users when the language developers aren't willing to take responsibility for those issues, either!

Yet the examples you originally cited involved language users (or application users) knowingly using something outside the limits of its original specification. The developers did take responsibility, in that they provided a tool with known capabilities intended for specified uses. If I attempt to use a power supply with a load for which it's not rated, I wouldn't be surprised to have it blow up in my face. So why should I be surprised that a piece of software used in an application for which it isn't designed doesn't function as I expect?

While I wouldn't expect users to always build their applications for safety-critical operations, I see very little reason the components with which we build our applications shouldn't be suitable for the same. Language primitives, standard libraries, GUI frameworks, data distribution services and publish/subscribe models, etc. can and should be advanced such that they are suitable in as many applications as possible, including safety-critical applications.

Demanding that every developer design their components to be used in every possible application or domain seems unlikely to work in practice. You're essentially requiring a substantial investment in non-recurring engineering costs on the off-chance that a component may some day get used in some unspecified application domain. Few companies are likely to take on such a task, and I'm skeptical that many open source projects would tackle it either. To me, it seems more reasonable to expect that developers will produce components that have a well-delineated set of guarantees, from which it will be clear what application areas it is or is not suited to.

With languages, we really could give every software developer the 'bolts and resistors' suitable for building passenger aircraft. Making this happen is a job for language designers.

Indeed. But few people seem to be keen to use Lustre or Esterel to develop browser applications. Perhaps because the constraints required to provide safety and timing guarantees make those languages cumbersome to use outside of their intended application domain.

While that would be useful (the ability to identify invariants, preconditions, postconditions, and other guarantees for operations is very useful), I do not believe it scales very well... This approach, taken by itself, is really an invitation saying: "I really can't be bothered with you. Build your own language, including your own set of libraries, if you need a certain property.".

I disagree. Practically speaking, some components are going to offer stronger guarantees than others. Even if you mandate some common set of guarantees, the odds are that the lowest common denominator of guarantees is not sufficient for some applications. In those cases it would be helpful to have some way of easily identifying the libraries that provide something beyond the "common" guarantees, and of understanding what those additional guarantees are. That doesn't necessarily mean that every project must start fresh: there's a good chance that applications in the same domain will require similar guarantees, and could work to similar standards (in the same way that the Ravenscar profile is used with Ada). A way to clarify the limits of applicability of a particular component would make the assembly of domain-specific libraries of components much easier.

Extent of Knowledge

Yet the examples you originally cited involved language users (or application users) knowingly using something outside the limits of its original specification.

Indeed, Allan. Let's just keep shifting responsibility to the application users. That neatly absolves the application-engineers and the language-engineers.

So why should I be surprised that a piece of software used in an application for which it isn't designed doesn't function as I expect?

Sadly enough, you likely wouldn't be surprised that a piece of software used in an application for which it is designed doesn't function as you expect. That's state-of-the-art today.

While I wouldn't expect users to always build their applications for safety-critical operations, I see very little reason the components with which we build our applications shouldn't be suitable for the same.

Demanding that every developer design their components to be used in every possible application or domain seems unlikely to work in practice.

Good thing I'm not demanding it of every developer, then, eh? Application developers are free to do as they please. The onus is on those developing the tools for widespread use by application developers.

To me, it seems more reasonable to expect that developers will produce components that have a well-delineated set of guarantees, from which it will be clear what application areas it is or is not suited to.

Perhaps, but it is also reasonable to expect that other developers will want a feature that is not quite provided by that well-delineated set and will end up needing to overhaul the existing design or implement an entirely new one to achieve it. If they lose another feature in the process, this tends towards a combinatorial explosion of 'choices' for each design... and an enormous amount of rework being carried out repeatedly by developers who really should be able to focus on more useful things, like the applications.

That said, there are perhaps some language designs that can encourage naturally moving the existing designs towards having a common, broader set of goals. I expect, though, that these would require the ability to refactor other people's applications (since significant changes to library properties often require changes to the interface).

Indeed. But few people seem to be keen to use Lustre or Esterel to develop browser applications. Perhaps because the constraints required to provide safety and timing guarantees make those languages cumbersome to use outside of their intended application domain.

Or perhaps the language-designers simply haven't found or sought a way to make them far less cumbersome. I don't know enough about Lustre or Esterel to argue it one way or the other, but I do know that there's a world of difference between bondage-and-discipline type guarantees that cut away possibilities (like manifest typing) vs. constructive proof type guarantees that develop language primitives for certain analyses and search for ways to achieve possibilities (like type-inference). One of the significant distinctions is the degree to which potential reuse of expression is possible - the 'manifest' approaches limit reuse based on whatever is manifest. Which do Lustre and Esterel use?

While that would be useful (the ability to identify invariants, preconditions, postconditions, and other guarantees for operations is very useful), I do not believe it scales very well... This approach, taken by itself, is really an invitation saying: "I really can't be bothered with you. Build your own language, including your own set of libraries, if you need a certain property.".

I disagree. Practically speaking, some components are going to offer stronger guarantees than others. Even if you mandate some common set of guarantees, the odds are that the lowest common denominator of guarantees is not sufficient for some applications. In those cases it would be helpful to have some way of easily identifying the libraries that provide something beyond the "common" guarantees, and of understanding what those additional guarantees are.

I agree with that statement. Where I object is attempting to use this feature as a substitute for achieving a fairly large common denominator. This technique, taken by itself, doesn't scale.

Demanding that every

Demanding that every developer design their components to be used in every possible application or domain seems unlikely to work in practice.

I think he's saying that at the very least, the language should not be imposing restrictions such that otherwise reusable abstractions cannot be reused. For instance, manual memory management imposes severe restrictions on composition.

But composition frequently fails in practical systems

Much of modern software development depends on a myth known as "modularity"--the belief that correct software systems can be combined ad infinitum and that a correct software system will result.

This myth, while true in the realm of theory where CPU cycles and memory are infinite, and performance requirements are often ignored, runs aground on the rocks of reality in practice. For many software systems, Moore's law and increases in PL technology (both static program analyses, and extra-language techniques such as tracing GC) make this myth appear real--and when it isn't real, you can rectify the situation by adding RAM or a faster processor. And of course, in the realm of software which is to run on general-purpose hardware (i.e. PCs), the developer has no a priori knowledge of what machine--real or virtual--the program will run on.

But in other application domains, one finds the combination of a) a fixed (and known) platform, b) documented and frequently critical requirements concerning execution and response times, and c) catastrophic consequences for failure.

Assembling components into systems is the heart and soul of engineering. Some systems are unconstrained, having no knowledge of key system components and minimal consequences for failure; but others are not.

The author above confuses "application users" with "application developers". Certainly, (untrained) users should not need to be concerned with detailed engineering knowledge--to the extent that they do engage in any system design and assembly, it should be with "safe" abstractions. Application developers, on the other hand, should be capable of making informed decisions. (It could be argued that SW engineering has not sufficiently evolved as a profession, and that "correct" decisions are frequently not made, but this is a social/political issue more than a technical one).

It shouldn't be surprising that what is appropriate for one problem domain is not appropriate for another. Just as it would be inappropriate for a civil engineer to design a highway overpass with Legos, it would also be inappropriate for a child to be given a truckload of concrete and rebar for playtime. To suggest that Lego Corporation has engaged in irresponsible behavior because their bricks aren't suitable for roadwork projects is silly. And while I agree that there are vast differences between software artifacts and physical ones, there are many underlying principles of both engineering and economics that apply.

the belief that correct

the belief that correct software systems can be combined ad infinitum and that a correct software system will result.

You're right that not all compositions are possible, and certainly aren't possible given the current state of the art, but I don't think there's any dispute that current languages, especially popular ones, are more restrictive than they need to be, and so disallow many useful compositions.

I would also speculate that a language founded on a universal model of composition (monads perhaps?) would allow many compositions that are currently impossible. By this I mean a language in which a reusable library would execute on a stack when composed with serial programs, but execute in, say, CPS form when composed with parallel programs.
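
One reading of that speculation, sketched in Haskell: a routine written once against an abstract Monad can be run directly or in CPS, with the choice deferred to the composition site.

    import Control.Monad (foldM)
    import Data.Functor.Identity (runIdentity)
    import Control.Monad.Trans.Cont (evalCont)

    -- The "library" code, written once, with no commitment to an
    -- execution strategy.
    sumTo :: Monad m => Int -> m Int
    sumTo n = foldM (\acc x -> pure (acc + x)) 0 [1 .. n]

    main :: IO ()
    main = do
      print (runIdentity (sumTo 10))  -- direct, stack-style execution
      print (evalCont (sumTo 10))     -- the very same code, run in CPS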

The only environments which don't currently fit this model are resource-constrained systems, but there's a good argument to be made that this is only because an appropriate semantics for reasoning about time and/or space usage has yet to be discovered. There are some space/time analyses for first-order programs, but extending them has been tricky, IIRC.

Then come the appropriate optimizations, such as fusion and partial evaluation, all of which can be used to meet resource constraints, assuming the algorithms are sufficiently predictable. There still seems to be a lot of work left to move program analysis from a science to an engineering discipline, I think.

Just as it would be inappropriate for a civil engineer to design a highway overpass with Legos, it would also be inappropriate for a child to be given a truckload of concrete and rebar for playtime.

The difference is, here we're dealing purely with information, with mathematics. If the language guarantees certain properties, and allows developer B to infer satisfactory properties of a library written by crappy developer A, then the fact that developer A is crappy is almost irrelevant. I'm not sure such a point is actually reachable with a usable language, but we can get much closer to it.

No need for the impossible...

I'm not asking for infinite composition in the presence of finite resources. Indeed, I'd be ecstatic if the level of composition I could perform was bound primarily by resource limits.

Instead, it is bound by concerns for: reliability, disruption tolerance, failure recovery, deadlock safety, confinement (keeping secrets... secret), capability management, initialization ordering, policy injection, thread management issues (e.g. OpenGL and Win32 both have limitations on which threads can perform certain operations), persistence, dataflow (monitoring for changes in external systems), memory management issues (e.g. in C++ it seems like every framework has a different smart-pointer.)

Modern program composition is, to a large degree, an exercise in juggling frameworks. Most programmers fail at this long before their resources do.

Language designers have the power to provide a great deal more composition. It doesn't always take big changes, either. Even small things, like confining or forbidding side-effects in constructors, or trading 'new' for an abstract constructor that can just as easily return an existing object, can be very useful. Sweeping changes, such as supporting software transactional memory or providing garbage-collection or making 'persistent' the default with inferred volatility being the optimization, can greatly improve composition of different systems. I.e. STM allows isolation in concurrent systems even when interacting with two different frameworks that don't know about one another. GC does the same for sharing of the memory.

Persistence by default simply relieves the programmer of one of the great, pervasive challenges... and also improves performance and performance predictability, in the sense that persistence allows copying GC and can allow an arbitrarily large amount of data to be accessed from a fixed amount of RAM (via a paging system).

Performance is the pervasive concern you described - both in terms of its predictability (time/space) and efficiency. Both of these properties are usually hindered by composition, especially when composing excessively expressive languages. But language-designers can work on answers here, too. E.g. Partial-evaluation is certainly aided by effect-typing and termination properties. And partial-evaluation is the fundamental approach to flattening composed abstractions and removing dead-code on a per-use basis.

The author above confuses "application users" with "application developers".

While I'll admit that the line is blurring nowadays (you've got a group in-between that does app-configuration, plugins, and greasemonkey scripts), I don't believe either Allan or I confused the two.

Application developers, on the other hand, should be capable of making informed decisions.

Choice, or 'there is more than one way to do it', is a sometimes necessary evil, and should generally be delayed (late binding of choices).

Choices tend to grow and grow in combinatorial manners and resist composition. Not only do app-developers have choices, but so do library developers. An app-developer that wants to use two different libraries might be stuck with incompatible choices... unless he or she writes the library again with different choices, or rewrites the library with abstractions on the choices.

Choice abstracted is still problematic because you must still make a choice, and now every app-developer who uses the library must learn a lot about that library to know how the choices affect the properties of the overall system. Having to know every library one uses inside and out doesn't scale for human developers to anything but the smallest of projects. (That said, it's less problematic, since it allows overrides and can have sane defaults.)

I'm okay with leaving choice to app-developers so long as the library developers avoid consuming that forbidden fruit.

Even better, though, would be to leave 'choice' to the language system and compiler. I.e. instead of making choices early when writing the code (which equates to selecting strategies), one states goals and local policies then offers several strategies. At the top-level, one then states default global policies (aka decision heuristics) and requirements (like invariant timing and space requirements). The compiler is free to invent new strategies to accomplish the goals, and may use the initial strategies as fodder. (It may also remember and learn from previous efforts at compiling the same code. And it can ask for programmer help.)

This approach (commonly called Declarative Metaprogramming) is (I believe, but cannot prove) more suitable for automatically adapting code to the sort of realtime and finite space requirements you described above. Oh, and it avoids the various nasty evils that come with early-binding of choice.

while I agree that there are vast differences between software artifacts and physical ones; there are many underlying principles of both engineering and economics that apply.

I've learned in humbling ways that subtle but pervasive differences are enough to undermine just about any sweeping generalizations or analogies.

[Edit: Also, I'm starting to feel guilty about how off-topic this discussion has become. I'll make this my last post in the line.]

Disabbreviation

GRS - Graph Rewrite System
TRS - Term Rewrite System
LTRS - Labeled Term Rewrite System (as in this paper)
FPL - Functional Programming Language (I'm talking only about pure functional programming languages, where all proofs are supposed to be done automatically by the compiler).

So in an LTRS, the current state of the evaluator is a DAG.

Do not read too much into the word `evil'. That was an unfortunate joke; excuse me. But everything else is entirely serious.

To put it in a nutshell, my point is that DAG rewriting systems are a much more promising `base' for pure functional programming languages without manual proofs than graph rewriting systems.
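
To illustrate the distinction with a toy Haskell example:

    -- Plain sharing gives a DAG: x has two parents but there is no cycle.
    shared :: Int
    shared = let x = 6 * 7 in x + x

    -- A recursive binding gives a true cycle in the heap graph.
    ones :: [Int]
    ones = 1 : ones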

Sharing is evil when done thoroughly

Lamping's partial-sharing graph model of the lambda-calculus, which captured Levy's notion of optimal reductions, is a representation of sharing that seems to be inherently cyclic.

The idea of partial sharing is that you allow subterms with holes to be shared, which allows the critical pairs that prevent there being optimal DAG models of reduction for the lambda-calculus to be interpreted without duplication of beta redexes. This notion of sharing is quite powerful: you can represent lambda terms of arbitrary size as a partial-sharing graph with a single lambda application. And here we see why cyclicity is fundamental to these models: because the continuation, function and argument of the application are shared many times, the application itself finds itself wound in many cycles.

I can agree from much personal experience that these graphs are hard to analyse. But the hardness is the price paid for something they can do that DAGs can't.

And a point: term graph rewriting usually means acyclic graph rewriting. I've certainly seen a few cute cyclic representations of self-application, but these are usually dropped in applications. I'm curious as to which cyclic graph rewriting system(s) annoyed you.

Seeming isn't Believing

a representation of sharing that seems to be inherently cyclic

A Spiral is not quite a Cycle. It may 'seem' like one, though, if you're not making certain distinctions. Recursive Path Order (RPO) proofs over LTRS are capable of distinguishing many spirals from cycles, including quicksort.
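
To illustrate with the standard example (a Haskell sketch): each recursive call receives a strictly shorter list, a descent that an RPO-style proof can certify, even though the call graph looks superficially circular.

    -- A "spiral", not a cycle: both recursive calls act on lists
    -- strictly shorter than the input, so the recursion terminates.
    quicksort :: Ord a => [a] -> [a]
    quicksort []       = []
    quicksort (p : xs) =
      quicksort [x | x <- xs, x < p] ++ [p] ++ quicksort [x | x <- xs, x >= p]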

I'm curious as to which cyclic graph rewriting system(s) annoyed you.

I don't believe 'annoyance' is the right word for it. It's more an issue of the grass being greener over yonder where programs terminate. My own motivations for seeking terminating programs include a wide range of optimizations that are guaranteed to terminate when applied to terminating programs, and raising barriers against a common source of bugs when dealing with code distributed to endpoint systems (most non-terminating programs not bounded by an input stream end up that way by accident...).

Not quite a spiral

A Spiral is not quite a Cycle. It may 'seem' like one, though, if you're not making certain distinctions. Recursive Path Order (RPO) proofs over LTRS are capable of distinguishing many spirals from cycles, including quicksort.

Lamping's partial sharing graphs are directed graphs with cycles. There are directed paths from nodes back to themselves.

The equivocal language I used in my grandparent post is due to the fact that there isn't a reasonably adequate, formalised class of term-graph sharing representations that I can point to, not hesitancy about what cycles mean.

I'm just thinking about

I'm just thinking about various program analysis and transformation devices, and especially automatic memory management without garbage collection. Cycles complicate things severely without being useful enough. Or perhaps the stock of functional programming techniques is just too young to show otherwise.

More keywords

TDRS -- Term DAG Rewriting System

Sorry for on-topic, but

To put it in a nutshell, my point is that DAG rewriting is better than graph rewriting as a `base' for pure functional programming languages without manual proofs.

Better in the sense of being easier to reason about: I agree.
Better in the sense that we do not need cycles: strongly disagree. There are algorithms that only have a reasonable complexity/efficiency due to a cycle. Without cycles we would need a more complex algorithm to mimic the effect. Cycles are not used every day, but they do occur in real programs (not just Hamming numbers).
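
For the record, here is the Hamming-numbers cycle in Haskell (Clean's version is analogous): the stream is defined in terms of itself, and that sharing is exactly what keeps it efficient.

    -- Each multiple-stream consumes the very list being produced;
    -- without the cyclic reference, already-produced values would
    -- have to be recomputed.
    hamming :: [Integer]
    hamming = 1 : merge (map (2 *) hamming)
                        (merge (map (3 *) hamming) (map (5 *) hamming))
      where
        -- The streams are infinite, so two-cons patterns suffice.
        merge xs@(x : xt) ys@(y : yt)
          | x < y     = x : merge xt ys
          | x > y     = y : merge xs yt
          | otherwise = x : merge xt yt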

Consider a compiler that translates statements to machine instructions. This compiler yields the statements as well as the actual addresses of all labels used (in a tuple or record). These positions are fed back to the compiler in order to generate the correct jumps. You can do this very elegantly with a cycle. If you cannot make this cycle, you will probably need a two-stage solution.
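
A sketch of that knot-tying style in Haskell (a toy instruction set of my own, not Koopman's actual Clean code): the label environment env is both produced and consumed by a single pass, which lazy evaluation makes possible; a strict language would need the two-stage solution.

    data Stmt = Label String | Op String | Jump String

    assemble :: [Stmt] -> [String]
    assemble stmts = code
      where
        -- 'env' is consumed inside 'go', yet also produced by it:
        -- a cycle that laziness resolves in a single pass.
        (code, env) = go 0 stmts
        go _ []             = ([], [])
        go a (Label l : ss) = let (c, e) = go a ss
                              in (c, (l, a) : e)
        go a (Op o : ss)    = let (c, e) = go (a + 1) ss
                              in (o : c, e)
        go a (Jump l : ss)  = let (c, e) = go (a + 1) ss
                                  tgt = case lookup l env of
                                          Just t  -> t
                                          Nothing -> error ("unknown label: " ++ l)
                              in (("jmp " ++ show tgt) : c, e)

For example, assemble [Jump "end", Op "nop", Label "end", Op "halt"] yields ["jmp 2","nop","halt"], resolving the forward reference in one pass.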

Pieter Koopman

"Elegant" Solution

Cross-posting, eh?

Pieter Koopman wrote:
You can do this very elegantly with a cycle. If you cannot make this cycle, you will probably need a two-stage solution.

I wonder what Pieter Koopman finds less elegant about a two-stage solution. Personally, I find N-stage solutions very elegant, so long as 'N' can be fixed in number without first examining the input.

Elegance must be a matter of philosophical perspective.

Anyhow, I'm slowly reviewing LTRS. It seems promising for termination proofs in both logic and functional programming. I've been fairly content to use structural AST analysis for termination proofs, but the flexibility to prove termination of such things as QuickSort may be worth the greater analysis effort of using LTRS.

Not cross-posting

Cross-posting, eh?

No. This topic is about FP foundations; that topic is about adding an option to the Clean compiler to turn off recursive lets and wheres, leaving only nonrecursive ones. Letrec and whererec are the only ways to knit cycles in Clean.