## Towards Applicative Relational Programming

This 10-page 1992 article, Towards Applicative Relational Programming by Ibrahim & van Emden, has just appeared on arXiv. It asks the question of how to combine functional and relational programming.

This is fairly well-trodden ground now, with approaches such as Miller's lambda-Prolog and Saraswat's constraint-lambda calculus being well established, but this paper offers a rather different approach, based on the Henkin-Monk-Tarski theory of cylindric algebras, which were devised as a means of formalising predicate logic in equational logic. I'm familiar with them in the context of modal logic, since they offer a means of handling quantifiers by modalities, where [] is used to express forall and <> is used to express exists.
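To spell out that correspondence: in a cylindric algebra there is, for each variable x, a cylindrification operator c_x that behaves like the diamond modality, so the quantifiers can be encoded equationally along these lines (a sketch of the standard encoding, not notation from the paper itself):

```
Ex. P   ~   c_x P            -- exists as a <>-like operator
Ax. P   ~   not (c_x (not P))   -- forall as the dual, []-like operator
```

This is exactly the duality between <> and [] in modal logic, with one modality per variable.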

The treatment is nice, and is recommended to LtUers interested in having an arsenal of techniques for bridging the declarative divide.

## Comments

Your link gives me "Pivotal and Pivotal-discriminative Consequence Relations". This link gives me "Towards Applicative Relational Programming".

Thanks!

### Higher Order (in the functional programming sense)

Is it really true that the declarative approach precludes higher-order programming? Here I'm using the term higher-order in the sense of first-class functions/relations rather than the quantificational interpretation.

HiLog seems to work fairly well, and LambdaProlog also has facilities for higher-order programming. What exactly are the limitations they are referring to in this paper?

### I'm not sure what they mean

I'm not sure what they mean in this paper, but I think the problem (or should I say "trade-off") with higher-order logic (and higher-order functions, for that matter) is that they result in extra layers of control at run-time, creating something like the control flow we are trying to avoid with declarative programming in the first place. The main advantage of declarative programming, as I see it, is that the text of the program is a pretty close description of how the program executes at run-time. When higher-order features are used, this is not so much the case, and I'm back to simulating the program in my head again to understand it (e.g., to resolve how variables are bound to predicates or functions).

Extending a declarative (logic) programming language with higher-order predicates (or variables that can be unified with predicates) is tempting because of its expressiveness, but such languages really require a lot of clever programming to use. I'm speaking from experience: I tried this before in an LP language I designed and implemented, and it was very expressive, but ultimately too difficult for anyone to use but myself.

### I am not sure what you mean

> I think the problem (or should I say "trade-off") with higher-order logic (and higher-order functions, for that matter) is that they result in extra layers of control at run-time

Do you mean that, for example, using map/fold/unfold is less declarative than the first-order solution? Or did I completely misunderstand your statement?

### Yes, using map/fold/unfold

Yes, using map/fold/unfold is less declarative than a first-order solution (if one is available) without control flow. The problem is that the higher-order functions themselves introduce complexity because they are being called somewhere, so there is control flow.

Here is an example of map/filter using Scala syntax, without using a for comprehension:

```scala
messages.filter(msg => msg.from == myBoss).map(msg => msg.subject)
```

Now using a for comprehension to get rid of the explicit higher-order functions:

```scala
for (msg <- messages if msg.from == myBoss) yield msg.subject
```

In the first example, I have to think about what the higher-order functions are doing; i.e., I have to think about the control flow. In the second example, even though it is compiled into the first, I don't have to think about this control flow so much. This is just a simple example; things get much uglier when the higher-order functions are more complicated.
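The same desugaring story holds in other languages. As an illustration in Python (the `messages` data and `my_boss` name are invented here for the example), a comprehension hides the very `filter`/`map` calls that the explicit higher-order version makes you trace:

```python
messages = [
    {"from": "boss", "subject": "Q3 report"},
    {"from": "alice", "subject": "lunch?"},
]
my_boss = "boss"

# Explicit higher-order style: the reader must trace the filter and map calls.
subjects_ho = list(
    map(lambda m: m["subject"],
        filter(lambda m: m["from"] == my_boss, messages))
)

# Comprehension style: same result, but the control flow stays implicit.
subjects_comp = [m["subject"] for m in messages if m["from"] == my_boss]

assert subjects_ho == subjects_comp == ["Q3 report"]
```

Both forms compute the same list; the difference is only in how much plumbing the reader has to simulate.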

In a declarative (not functional) language it's even worse, because higher-order functions just don't fit very well into the language. Higher-order predicates are just weird if you have to program with them (personal opinion, not trying to start a flame war).

```haskell
[subject msg | msg <- messages, from msg == myBoss]
```

This is pretty much the same as

```haskell
filter (\msg -> from msg == myBoss) messages
```

(The point-free version would be `filter ((myBoss ==) . from) messages`.)

Or is that more functional than declarative?

### declarative

Looks declarative to me. For comprehensions are list comprehensions, and are only named differently because (1) they can operate over more than just lists and (2) they use the "for" keyword :) Anyway, an explicit lambda isn't needed, but lambdas surely occur behind the scenes.

### Higher-order Logic Programming in Prolog

Just as an aside, I found Higher-order Logic Programming in Prolog to be an interesting read. I liked their example #6:

```prolog
map(plus(X), [2, A, 4], [3, 4, B]).
```

(With plus/3 as the usual addition relation, solving this binds X = 1, A = 3, and B = 5 — the logic variables flow in both directions through the higher-order call.)


### Higher-order unification is undecidable

If you're building a logic programming language, you'll probably want to use unification to propagate constraint information. However, if the terms that you're unifying have binding structure (i.e., are lambda terms), then the unification problem becomes undecidable, and even when solutions do exist, there may not be most general unifiers. (This was Huet's thesis work back in the 70s.)
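For contrast, first-order unification — no binders in the terms — is decidable and always yields a most general unifier when one exists. A minimal sketch in Python (the term representation is invented for illustration: variables are capitalized strings, compound terms are tuples of functor and arguments):

```python
def is_var(t):
    """A variable is a string starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings in the substitution to a representative."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, x, subst) for x in t[1:])

def unify(a, b, subst=None):
    """Return a most general unifier (a dict) for a and b, or None."""
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if is_var(b):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and len(a) == len(b) and a[0] == b[0]:
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # functor clash
```

For example, `unify(('f', 'X'), ('f', ('g', 'Z')))` succeeds with the most general unifier `{'X': ('g', 'Z')}`, while `unify('X', ('g', 'X'))` fails the occurs check. Once terms may contain lambdas, no such terminating procedure producing most general unifiers exists.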

The "pattern fragment", discovered by Dale Miller, is a very interesting subset of higher-order unification problems that is decidable, and IIUC systems like Twelf and LambdaProlog try to restrict themselves to it.
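To illustrate the restriction: in a Miller pattern, a metavariable may be applied only to distinct bound variables. A sketch of the standard contrast (writing F for a metavariable, \ for lambda):

```
F x y = c x      -- a pattern: unique MGU  F := \x y -> c x
F (c x) = c x    -- not a pattern (F applied to a non-variable):
                 -- both F := \z -> z and F := \z -> c x solve it,
                 -- and neither is more general than the other
```

Inside the pattern fragment, unification is decidable and most general unifiers exist, which is what makes it practical for Twelf-style systems.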