are rewrite systems really the bee's knees

I get the impression from reading about rewrite systems that in some sense we should all just give up on other approaches and simply do everything as rewrite rules. But of course I've never really used or learned such things, so I assume there are good and probably quickly obvious reasons why s-exprs and C-style syntaxes are still around. I'd appreciate thoughts/experiences/summaries.

They're Good

For data modeling, TRS is pretty darn cool, though I tend to favor logic programming for said purpose. Where TRS hurts is dealing with Input/Output. From an older comment:

One might note that term rewrite systems (like OBJ and Maude) achieve both "how they act, and how they're interpreted", simultaneously: you can easily define new objects and operations that act as primitives. The cost the term rewrite system approach pays: loss of sharp definition - no 'edges' for types or functions that could readily be used for I/O without shipping the whole program.
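To make the "act as primitives" point concrete, here is a minimal first-order rewriting sketch in Haskell (my own illustrative code, not OBJ's or Maude's actual machinery). Once the rules for plus are given, plus behaves exactly like a built-in operation.

    -- A term is a variable or an application of a named operator.
    data Term = Var String | App String [Term] deriving (Eq, Show)

    -- A rule pairs a left-hand pattern with a right-hand template.
    type Rule = (Term, Term)

    -- Match a pattern against a term, yielding variable bindings
    -- (left-linear patterns assumed; no consistency check on repeats).
    match :: Term -> Term -> Maybe [(String, Term)]
    match (Var v) t = Just [(v, t)]
    match (App f ps) (App g ts)
      | f == g && length ps == length ts =
          concat <$> sequence (zipWith match ps ts)
    match _ _ = Nothing

    -- Instantiate a template under a set of bindings.
    subst :: [(String, Term)] -> Term -> Term
    subst env (Var v)    = maybe (Var v) id (lookup v env)
    subst env (App f ts) = App f (map (subst env) ts)

    -- One outermost rewrite step; Nothing means the term is in normal form.
    step :: [Rule] -> Term -> Maybe Term
    step rules t@(App f ts) =
      case [subst env rhs | (lhs, rhs) <- rules, Just env <- [match lhs t]] of
        (t' : _) -> Just t'
        []       -> fmap (App f) (stepArgs ts)
      where
        stepArgs []       = Nothing
        stepArgs (x : xs) = case step rules x of
          Just x' -> Just (x' : xs)
          Nothing -> fmap (x :) (stepArgs xs)
    step _ _ = Nothing

    -- Rewrite to normal form.
    normalize :: [Rule] -> Term -> Term
    normalize rules t = maybe t (normalize rules) (step rules t)

    -- Peano addition as two rules: plus(z, N) -> N ; plus(s(M), N) -> s(plus(M, N))
    peano :: [Rule]
    peano =
      [ (App "plus" [App "z" [], Var "n"], Var "n")
      , (App "plus" [App "s" [Var "m"], Var "n"],
         App "s" [App "plus" [Var "m", Var "n"]]) ]

    -- normalize peano (App "plus" [App "s" [App "z" []], App "s" [App "z" []]])
    --   ==> App "s" [App "s" [App "z" []]]

Note there is no 'edge' here for I/O, as the quote says: plus only has meaning relative to the whole rule set, so you cannot ship it independently.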

Beyond the above, performance of TRS implementations when I was last using them left much to be desired.

TRS is logic programming

For data modeling, TRS is pretty darn cool, though I tend to favor logic programming for said purpose.

In fact, the first constraint programming language system (Wm Leler's Bertrand) was built from a TRS.

Do you specifically mean Horn clause logics like Datalog, Prolog, and Mercury?

Lately, I've been thinking that TRS are good for more than just logic programming: they also support metalogic programming, e.g. Invasive Software Composition, including the creation of composition DSLs. Rather than write Java code as in Reuseware, you can write a DSL in Maude that tightly defines your complex composer's behavior using mathematical reasoning.

Beyond the above, performance of TRS implementations when I was last using them left much to be desired.

Pure supposedly benchmarks well, and Maude's model of computation is well-suited to coarse-grained parallelism. I would like to see the ability to use concurrent, work-stealing deques (as in Cilk and .NET's Concurrency and Coordination Runtime (CCR)), and also an Anthill Project-like complex adaptive system for communicating processes. What I would like is the ability to do peephole optimizations in a TRS, expressed as constraints orthogonal to the problem domain, for fine-grained instruction-level parallelism; a sketch of the idea follows. Scheme uses macros to do something similar, which Sarkar et al. call nanopasses; see A Nanopass Infrastructure for Compiler Education.
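By way of illustration, here is a small hedged sketch in Haskell of "peephole optimization as rewrite rules": the local rules live apart from the term language's evaluator, as orthogonal rewrites (all names here are mine, purely illustrative).

    -- A small expression language.
    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr | Shl Expr Int
      deriving (Eq, Show)

    -- One bottom-up pass applying local (peephole) rules.
    peephole :: Expr -> Expr
    peephole = go
      where
        go (Add a b) = rule (Add (go a) (go b))
        go (Mul a b) = rule (Mul (go a) (go b))
        go e         = e

        rule (Add (Lit 0) e) = e        -- 0 + e  ==>  e
        rule (Add e (Lit 0)) = e        -- e + 0  ==>  e
        rule (Mul e (Lit 2)) = Shl e 1  -- e * 2  ==>  e << 1
        rule (Mul (Lit 2) e) = Shl e 1  -- 2 * e  ==>  e << 1
        rule e               = e

    -- peephole (Mul (Add (Lit 0) (Lit 7)) (Lit 2))  ==>  Shl (Lit 7) 1

The rules never mention evaluation; they constrain form only, which is what makes them composable with the domain rules.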

TRS and Logic Programming

As I understand it, TRS is more flexible, less structured, than logic programming. As is typically the case, this flexibility bears costs elsewhere.

Logic programming assumes the logic, whereas TRS develops it. Logic programming often hooks a domain database, whereas TRS operates on domain-model variables. Functional reactive logic programming is easy to develop: parametrize a small logic program by a binding to one or more live databases and values. The output is another live well-structured lazy database, suitable for observation or further composition.
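To show roughly what I mean (a hedged sketch in Haskell, with illustrative names; real change propagation is elided), the logic program is just a pure function from fact sets to a derived fact set, so rebinding it to a new database snapshot re-derives a fresh, well-structured output:

    import qualified Data.Set as Set

    type Fact    = (String, String)   -- e.g. ("alice", "bob") in parent/2
    type DB      = Set.Set Fact
    type Program = DB -> DB

    -- grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
    grandparents :: Program
    grandparents parent = Set.fromList
      [ (x, z) | (x, y)  <- Set.toList parent
               , (y', z) <- Set.toList parent
               , y == y' ]

    -- grandparents (Set.fromList [("a","b"), ("b","c")])
    --   ==> Set.fromList [("a","c")]

Because the output is a set, presentation order and duplication carry no meaning, which is what makes it safe to observe or compose further.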

Perhaps I'll review TRS again and see if it could be similarly composed. What I need is some good referentially transparent glue for domain modeling augmented by reactivity (live values), and currently I'm using logic programming in that role. The ability for a term rewrite system to develop both the logic and the model could be a win overall - even if it takes a performance hit - if it can be made to compose cleanly.

More structured, more bottom-up

As I understand it, TRS is more flexible, less structured, than logic programming.

It is indeed more flexible (more generalizable), but it is also more structured, not less structured, because it is built bottom-up.

As is typically the case, this flexibility bears costs elsewhere.

Logic programming assumes the logic, whereas TRS develops it.

No argument there... TRS:logic programming :: Scheme:Lisp dialects.

Functional reactive logic programming is easy to develop: parametrize a small logic program by a binding to one or more live databases and values.

What does 'easy to develop' really mean, though? You've tightly coupled to the database and the logic system's interpretation of values in the database, whereas in the reflective case the values include the logic system's expression grammar. Just a thought...

The output is another live well-structured lazy database, suitable for observation or further composition.

In a general model, it seems like all output is/should be lazy and input would be eager. Scheduling (really, resource allocation) simply inverts this; the programmer says what, and the compiler says when, and peephole optimizations are just a special case where the compiler is a human or some AI. Just a thought...

A Matter of Perspective

more structured, not less structured, because it is built bottom-up

Perhaps we have a different perspective of what it means to be 'more structured'. I would say logic programming is more structured because it imposes a certain structure - i.e. an organization of distinct concepts such as the data, a model or query, and the output, and a particular organization of the data and output as sets (presentation order and duplication are of no semantic relevance to initial and derived logical 'truths').

In TRS, you could certainly build all this structure, but it isn't imposed on you. Since the structure isn't imposed on you, TRS is (in my understanding) less structured - it has more degrees of freedom. But I can see how you might say otherwise if you interpret the phrase from the other direction: you control every element of the structure, therefore TRS is more structured.

What does 'easy to develop' really mean, though?

In this case it means it takes only a couple of simple functional primitives (e.g. for sets and queries) to achieve semantics for functional logic programming in a manner that can readily be parallelized, optimized, and made reactive; a sketch follows.
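Concretely (a hedged Haskell sketch, with names of my own choosing), two primitives along these lines suffice for the core:

    import qualified Data.Set as Set

    -- Restriction over a set of facts.
    select :: Ord a => (a -> Bool) -> Set.Set a -> Set.Set a
    select = Set.filter

    -- A join: combine pairs of facts whenever the combiner succeeds.
    join :: Ord c => (a -> b -> Maybe c)
         -> Set.Set a -> Set.Set b -> Set.Set c
    join f xs ys = Set.fromList
      [ c | x <- Set.toList xs, y <- Set.toList ys, Just c <- [f x y] ]

Both are pure set-to-set functions, so they can be parallelized, memoized, or recomputed reactively without any change in meaning.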

You've tightly coupled to the database and the logic system's interpretation of values in the database, whereas in the reflective case the values include the logic system's expression grammar. Just a thought...

Well, perhaps that's one way of thinking about it. The glass-half-full version of the same thought is: "You've decoupled the interpretation of the values from the source of those values, whereas in the reflective case every value drags along its full interpretive context". Whether the glass is half-empty or half-full really depends on how much serialization and IO you need. Messaging is considerably easier if the semantics of each message are determined entirely by its structure and recipient, as those are the only two features you can guarantee to be available when processing the message.

As a minor point of protest, I'll also note that a parametric coupling between a database and a query doesn't seem particularly 'tight'. In reactive logic programming, the only 'tight' coupling is to the structure and serialization protocol. Both first-class queries and first-class databases may be decoupled parametrically.

in a general model, it seems like all output is/should be lazy and input would be eager

I think input should be demand-driven but with low-latency invalidations. An invalidation does not mean that a new value has been eagerly computed - only the staleness signal is eager, and the new value may still be computed lazily (a small sketch follows). Is this what you mean?
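Something like this toy cell, in Haskell (an illustrative sketch, not any particular library's API): invalidation is a cheap eager signal that only marks the cell dirty, while the value itself is recomputed lazily on the next read.

    import Data.IORef

    data Cell a = Cell { compute :: IO a, cache :: IORef (Maybe a) }

    newCell :: IO a -> IO (Cell a)
    newCell f = Cell f <$> newIORef Nothing

    -- Eager and cheap: just drop the cached value.
    invalidate :: Cell a -> IO ()
    invalidate c = writeIORef (cache c) Nothing

    -- Lazy and on demand: recompute only if the cache is dirty.
    readCell :: Cell a -> IO a
    readCell c = do
      m <- readIORef (cache c)
      case m of
        Just v  -> return v
        Nothing -> do
          v <- compute c
          writeIORef (cache c) (Just v)
          return v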

Perhaps we have a different

Perhaps we have a different perspective of what it means to be 'more structured'. I would say logic programming is more structured because it imposes a certain structure - i.e. an organization of distinct concepts such as the data, a model or query, and the output, and a particular organization of the data and output as sets (presentation order and duplication are of no semantic relevance to initial and derived logical 'truths').

That's silly. Access control lists "impose a certain structure" as well. Does that mean access control lists are more structured than o-caps?

I'll reply to the rest later.

Access control lists

Access control lists "impose a certain structure" as well. Does that mean access control lists are more structured than o-caps?

Yes. ACLs demand a TCB (trusted computing base) and impose a variety of protocol requirements. O-caps are less structured, and correspondingly more flexible in their application, though they do impose a different structure (requiring the elimination of ambient authorities). You can't exactly place the two in a total order.
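A rough sketch of the structural difference, in Haskell (my own illustrative types, not any real security library's API): an ACL check consults a central table keyed by an authenticated principal, which is where the TCB and protocol requirements come from, while an object capability is just an unforgeable reference, so holding it is the authority.

    import qualified Data.Map as Map

    type Principal = String
    type ACL = Map.Map Principal [String]  -- principal -> permitted operations

    -- ACL style: every access goes through a central check.
    aclWrite :: ACL -> Principal -> String -> Either String ()
    aclWrite acl who op
      | op `elem` Map.findWithDefault [] who acl = Right ()
      | otherwise                                = Left "denied"

    -- O-cap style: the write capability is a plain function value;
    -- whoever holds it can write, and nobody else can conjure it up.
    newtype WriteCap = WriteCap (String -> IO ())

    -- Attenuation falls out for free: wrap a capability in a weaker one.
    loggedWrite :: WriteCap -> WriteCap
    loggedWrite (WriteCap w) =
      WriteCap (\s -> putStrLn ("write: " ++ s) >> w s)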

I have never once said that being more structured is inherently a good thing. Local structure involves a cost to local flexibility (though said flexibility may be recovered non-locally). You don't want to pay that cost to flexibility unless you're buying something useful with it (such as better optimizability and partial evaluation, or better serialization and distribution properties, or more predictable composition properties, or better resistance to attack). But that doesn't mean that you're always buying something worthy...

The structure imposed by ACLs only seemed worthwhile because of the security it offered. O-caps, however, can offer more expressive security features with lower maintenance, better performance, and far less structure, meaning the structure of ACLs was excessive and undesirable.

Keep reading José Meseguer

...and then progress to Manuel Clavel for cool reflective TRS theory

I observe two things about TRS:

1) TRS have not reached a 'tipping point' in academia where the youngest, best, and brightest want to do something with them for a Ph.D. or master's thesis; in Everett Rogers' Diffusion of Innovations terminology, TRS are still in the Innovators stage, and have not yet progressed to the Early Adopters stage.
2) Q, Pure, *OBJ*, and Maude do not have language inventors who are committed to proselytizing and promoting awareness of the paradigm... as compared with, say, Processing from MIT, which has its own website independent of MIT, geared for community interaction and the free exchange of important learning resources.

tipping point vs. secret weapon

sometimes i don't want things to become too popular :-) lest they give everybody an edge. are rewrite systems like Maude something one could actually bet money on?

I don't bet money on Languages

I know what Paul Graham has said about Lisp and Python in Beating the Averages.

Today, if I *were* to bet money on a language, it would be Clojure or Scala.

But generally I work in a domain where I can get away with a naive prototype that does a tenth of what a more complicated system does, and iteratively refine that prototype as I use it to gather real requirements and educate the design. [Edit: The point is that most of the time, I am building for the customer exactly what they want, not some weird visual programming tool with a thousand knobs to tune, like most OLAP products (case in point: Microsoft has recently folded most of its BI stuff into its larger products, because users "don't get it", it's too complex, and the only people who want it are those willing to pay $70,000 for it in addition to the training costs for such complex software). My CEO and CTO always instruct me to take that lesson and come up with a way to fit it into a software product line / software factory, so we can repackage it for other classes of users.]

As for Maude, I think its I/O system needs to be more intuitive and less clunky before the language is anything more than an academic executable-specification novelty and rapid-prototyping DSL tool. For DSLs, I'm not sure where it falls in the spectrum of Oslo, Xtext (ANTLR), or parser combinator libraries.

a disadvantage of rewrite systems

I was once looking at rewrite systems, as you are now. I wanted to create my own rewriting language, thinking it would be a relatively syntactically extensible language that I could use for everyday programming. Then I noticed a fatal flaw:

Rewriting systems handle non-local state clumsily. Sure, they do an excellent job for referentially transparent languages that only require local state (i.e. parameters passed to a function), but they would be extremely inconvenient to use for, say, side-effectful or imperative programming, because there's no obvious, simple way to retrieve global state (the usual workaround is sketched below).
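For the record, the standard workaround, sketched here in Haskell with illustrative names, is to make the store a component of the configuration being rewritten, so every rule that needs non-local state matches on and rebuilds it. That works, but it is exactly the clumsiness described above: the store has to be threaded through every rule.

    import qualified Data.Map as Map

    type Store = Map.Map String Int

    -- A tiny imperative program, represented as a term.
    data Prog = Assign String Int Prog   -- x := n ; rest
              | Print String Prog        -- print x ; rest
              | Halt

    -- The rewrite relation acts on (store, output, program) configurations,
    -- threading 'global' state and effects explicitly through each rule.
    step :: (Store, [Int], Prog) -> Maybe (Store, [Int], Prog)
    step (s, out, Assign x n k) = Just (Map.insert x n s, out, k)
    step (s, out, Print x k)    = Just (s, Map.findWithDefault 0 x s : out, k)
    step (_, _,   Halt)         = Nothing

    run :: (Store, [Int], Prog) -> (Store, [Int], Prog)
    run cfg = maybe cfg run (step cfg)

    -- run (Map.empty, [], Assign "x" 1 (Print "x" Halt))
    --   ==> (fromList [("x",1)], [1], Halt)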

Now that I'm thinking about it, I wonder if there has been any research aimed toward tackling this weakness of rewriting systems.

Bluespec.

Take a look at Bluespec.

It is used to describe parallel systems, such as digital circuits, and it excels at this.

Its main instrument for describing a system's reactions is a TRS with non-local state (non-local to the transformation rules).
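A rough flavor of that style, sketched in Haskell rather than Bluespec itself (illustrative names; real Bluespec schedules many non-conflicting rules per clock cycle): each rule is a guard plus an atomic action over shared state, and a scheduler fires enabled rules.

    -- A rule: a guard and an atomic update over shared state s.
    data Rule s = Rule { name :: String, guard :: s -> Bool, action :: s -> s }

    -- A toy scheduler: fire any enabled rule until none applies.
    runRules :: [Rule s] -> s -> s
    runRules rules s =
      case [r | r <- rules, guard r s] of
        []      -> s
        (r : _) -> runRules rules (action r s)

    -- Euclid's GCD over a shared register pair (a classic introductory
    -- Bluespec example), as two guarded rules.
    gcdRules :: [Rule (Int, Int)]
    gcdRules =
      [ Rule "swap" (\(a, b) -> a < b  && b /= 0) (\(a, b) -> (b, a))
      , Rule "sub"  (\(a, b) -> a >= b && b /= 0) (\(a, b) -> (a - b, b)) ]

    -- runRules gcdRules (15, 6)  ==>  (3, 0)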

(Edit: Bluespec was designed by Lennart Augustsson and once had a Haskell-like syntax. Another Bluespec designer was Arvind.)