Mercury vs. Prolog

Some time ago, Mr Ehud Lamm posted a link to a short comparison between Prolog and Mercury. I am very interested in this paper; is there any chance to see it, please?

On the other hand, feel free to give your own opinions on this subject.

I heard that Mercury is purely declarative, in contrast to Prolog? Don't you think that Prolog is the purer one?


Paper has new home.

Prolog and Mercury Compared. Also, the Prolog to Mercury transition guide might be of interest.

To the best of my knowledge, Mercury is fairly true to the Prolog model, though it does allow you to specify the unification with a bit more control (and it allows functional programming to boot).

I disagree.

Prolog is a database system built on a string-rewriting computation model. (Think of Markov algorithms here.)

Really, Prolog has little to do with "programming", much less "functional".


I believe the comment was about Mercury allowing functional programming.
That said, if we can't discuss Prolog as a 'programming' language, then I think we may have too narrow a definition of 'programming'.

It's not programming... in the same sense that "functional programming" is programming.

You can't "add functional programming support" to Prolog, because that would destroy the whole original point of Prolog.

Why not?

Perhaps Curry-Howard needs to be discussed at this point? (Specifications as types and proofs as programs).

Especially "proofs as programs", since "formulae as types" is surfacing quite frequently during type wars anyway.

I even suspect many people think Curry-Howard Isomorphism is limited to "formulae as types". Ah, here is a nice post by Neel Krishnaswami describing interpretation of Curry-Howard Isomorphism for both functional and logic programming.

Too much verbiage for me.

Functional programming is based on Turing machines and/or lambda calculus. (If you're lucky.)

Prolog, on the other hand, is more like a fancy Markov algorithm engine.

If you're running on stock consumer processors, this probably doesn't make any real difference. From a larger point of view, though, for the general case the difference might make a huge impact. (I think you can get some results from playing with substitution order.)
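For readers unfamiliar with the model being referenced: a Markov algorithm is just an ordered list of string-rewriting rules, applied repeatedly until none matches. A minimal illustrative interpreter in Python (all names here are invented for illustration, not from any real library):

```python
# A minimal Markov algorithm interpreter: an ordered list of string-rewriting
# rules, applied repeatedly; each step rewrites the leftmost occurrence of the
# pattern from the highest-priority applicable rule.
def run_markov(rules, text, max_steps=10_000):
    """rules: list of (pattern, replacement, is_terminal) tuples, in priority order."""
    for _ in range(max_steps):
        for pattern, replacement, terminal in rules:
            if pattern in text:
                text = text.replace(pattern, replacement, 1)  # leftmost match only
                break
        else:
            return text  # no rule applies: the algorithm halts
        if terminal:
            return text  # a terminal rule halts immediately after applying
    raise RuntimeError("step limit exceeded")

# Unary addition: erase the "+" between two runs of 1s to get their sum.
add_rules = [("+", "", False)]
print(run_markov(add_rules, "111+11"))  # -> "11111"
```

Even this toy version is Turing-complete in principle, which is part of why Turing-completeness alone says so little about a model's practical character.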

Ain't Prolog complete in the Turing sense?

Prolog's never been really classified in the category of FP languages, so I'm not sure why it is a discussion point on whether or not Prolog is a PL. OTOH, Mercury does a pretty good job of showing how to blend the declarative style with the functional style.

And I'm not sure why you use the Markov algorithm engine as a point of derision for Prolog? Heck, BrainF*ck gets Turing completeness in a mere 6 instructions (+2 if you want IO). And the Lambda Calculus is a fairly simple (but powerful) engine as well which can be mapped into a mere two SK combinators (though one supercombinator can technically do the trick).

Anyhow, we'd probably all agree that Prolog is a language. And we'd probably all agree that it isn't particularly well adapted to either imperative or functional programming styles. But does that preclude it from being considered a bona fide "programming" language? Guess we have to be a little more methodical about what we mean by "programming" before that matter can be resolved.

"Complete"? Yes.

Being Turing-complete does not mean being fully equivalent to a Turing machine, it just means being able to solve similar types of problems. How the problem is solved, and which approach might be "better" in some sense of the word -- now that is an open question that hasn't really been answered yet.

BTW, you misunderstood me -- solving problems with a different computation model is a fantastic opportunity, not a point of derision.

Don't think a Turing Machine has been built yet

I guess I'm not understanding your statements concerning Prolog. As pantagruel said, any definition of Programming Language that excludes Prolog would be too narrow.

The assumption of having an infinite amount of memory at your disposal makes the engineering of a TM a tad on the difficult side. Programming languages that can be run on the Turing Machine, however, are not that difficult to construct. Even though we may never get around to building a TM, we do have the luxury of building lots of programs that could theoretically run on it.

But then Turing Completeness doesn't really tell us how easy it is to program in a particular language. All it tells us is that any program you build in one language (Scheme, Haskell) can be expressed, however obtusely, in another language (BrainF*ck, Unlambda). Not really helpful in terms of defining conciseness of expression and the ability to construct meaningful abstractions. If conciseness and abstraction are the points of comparison, then the natural question is whether some languages are more concise than others, and whether they have the ability to build layers of abstractions.

Well, for certain classes of problems, Prolog is the most concise language to use and represents just the right level of abstraction for the problem. Trees and deduction are the natural allies of Prolog. This leads to the question of what Prolog is and isn't suited for. Like all PL usage, it's likely that the language is used well beyond the domain of its comfort zone.

The underlying model of a PL can be informative in how one goes about applying the language to solve problems. But it's the abstractions built upon that model and the ability to solve problems that ultimately determine the usefulness of any particular language.

No, that's not the point.

Briefly, being Turing-complete tells us nothing about time- and space-complexity of algorithms, especially when we have bizarre computing architectures. (Imagine massively parallel clusters, for example.)

Since Prolog uses a different underlying computation model from some other popular languages, time- and space-complexity characteristics of Prolog programs might be radically different from, say, C++ or Lisp or whatever.

I forgot what the point was?

Actually, I haven't been able to figure out what you are trying to say about Prolog, nor the ramifications of what you are saying. Apparently, I've misread the point you are trying to get across.

I'd say that we agree that Turing completeness doesn't really tell you much about the usefulness of any language to solve a given problem (especially given the variance in time and space complexity). And I think it rather obvious that algorithms in Prolog are much different from the algorithms in other PLs (except maybe the ones in the same genealogy: Mercury, Oz).

But that still leaves the matter unresolved. What is Prolog good for? And what is it not good for? Is the sweet spot of Prolog so narrow as to make it irrelevant for most domains? Or is its range much wider than we suspect?

Or are you not so much arguing about the paradigm of logic languages as about the specific design decisions of Prolog that may have compromised its usefulness for solving a wider array of problems?

No, the other way around.

Adding "functional programming" support to Prolog destroys whatever advantages Prolog might have had in the first place.

You might get a bit of fun out of the added "expressiveness", but in doing so you remove whatever algorithmic bonuses the Prolog computational model might have had. (Whether or not there is a real advantage to coding in Prolog -- I don't know, and I don't think this issue has been investigated yet. Still, it is worthwhile to keep it in mind.)

How does it destroy the advantages?

From what I gather, Mercury has done a pretty good job of keeping the advantages of Prolog whilst seemingly integrating functional programming. If what you are saying is true, then the FP layer in Mercury should have made Prolog-style algorithms impossible in the language.

Another language that arose from the Prolog tree would be Oz, which has a very different engine but shares the concept of logical unification. The Oz community would probably beg to differ that adding multi-paradigm approaches on top of the kernel would amount to destruction. Indeed, I've found some of the most interesting and informative writing on subjects like OOP and Concurrency to be found in this unique corner of the universe.

This thread is getting longish...

...briefly -- if adding FP is in some way detrimental to proper programming, then why bother adding it in the first place?

In effect, I'm advocating "pure" languages as they have predictable algorithmic properties.


tkatchev: In effect, I'm advocating "pure" languages as they have predictable algorithmic properties.

That's a singularly bizarre claim to make, particularly in Prolog's case!

Not really.

"Predictable algorithmic properties" == "has a simple and predictable computational model".

Phase Distinction

tkatchev: "Predictable algorithmic properties" == "has a simple and predictable computational model".

Sure, it's a simple definition, just hard to apply to Prolog. As is well-known, Prolog has notoriously unpredictable CPU cost thanks to its automatic-backtracking-whether-you-like-it-or-not semantics, leading to the (ab)use of the cut operator to curtail "unnecessary" backtracking, which implies encoding some meta-logic to determine when backtracking is "unnecessary" in your application. And the circumstances under which you need to do this are predictable how, exactly? Of course, your point may just be that there are circumstances in which you can predict that Prolog will take hours to resolve a clause, which is true, but not very informative.

By way of contrast, Oz, possibly the least-pure language in the world, supports the definition of your own search strategies, etc. Instead of imposing one approach on everyone, it offers you several out of the box and is open to developing more. Nevertheless, as becomes apparent if you work through CTM, it has a very simple, very predictable semantics, with hidden costs tending to come from the traditional places: bytecode interpretation overhead and garbage collection, the latter of which has well-known, well-documented approaches to avoiding it ("resourcing," a form of manual memory management, is often used in Lisp and Smalltalk systems in components where unbounded GC pauses are unacceptable).

Just one quick comment.

"CPU cost" is not at all equivalent to "algorithmic properties".

I don't think there is much value in counting CPU cycles in our age.

A quick response

I don't think there is much value in counting CPU cycles in our age.
If it's just a difference by a small constant multiplier, then perhaps. If it's O(n) vs. O(n*n), then good luck to you.

Of course, the specific details depend on the application, but I doubt there are practical applications that tolerate exponential algorithms, for example.

That's not what I meant.

There are more difficult subjects to approach than simple time complexity of execution -- what about memory usage, concurrency, scalability, etc.?

Most modern software (the kind that matters, anyways) runs on machines that have different algorithmic properties from a simple Turing tape. (Turing machines don't have to deal with concurrent message-passing algorithms, for example.)

CPU vs. Other Costs

I apologize for being unclear; when I said "CPU costs" I wasn't talking about counting cycles, but precisely what Andris said: complexity measure in time, vs. complexity measure in space, etc. Prolog quite often imposes unnecessary complexity costs in time, and somewhat less frequently in space.


I don't think anybody has yet shown this to be true.

Algorithmics, as a field, is still in an infantile state. (Most of it deals with Turing machines, which, frankly, is boring enough to be almost useless.)

In Mercury, the I/O predicate

In Mercury, the I/O predicates such as read/write are declarative, as they take the state-of-the-world as an argument and return a new state-of-the-world as a result, while in Prolog these predicates have non-logical side effects.
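The state-threading idea is not tied to Mercury's syntax; here is a rough illustrative sketch in Python of what "take a world, return a new world" means. All names and the world representation are invented for illustration, not Mercury's actual API:

```python
# Illustrative sketch of Mercury-style declarative I/O: every operation takes
# the current "state of the world" and returns a new one, so the whole program
# remains a pure function from world to world. (Representation is invented:
# here the world is just a log of events.)
def io_write(text, world):
    # A real system would perform the output here; this sketch only records it.
    return world + [("wrote", text)]

def io_read(world):
    # Pretend the environment supplied the line "hello".
    return "hello", world + [("read", "hello")]

def main(world0):
    line, world1 = io_read(world0)          # old world in, new world out
    world2 = io_write(line.upper(), world1) # each step threads the state
    return world2

print(main([]))  # -> [('read', 'hello'), ('wrote', 'HELLO')]
```

Because every predicate is explicit about consuming one world state and producing the next, the "side effect" becomes an ordinary data dependency, which is what keeps the semantics logical.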

(Prolog also has "cut", which I believe is non-logical, but I've never used it myself).

I think though that comparing Mercury and Haskell is more interesting, given that they have similar type systems (and Mercury now has type classes with functional dependencies as well). The big difference there is that Haskell is lazy, Mercury is strict.

You are correct that the cut

You are correct that the cut is a non-logical operator. Unfortunately, despite the fact that many Prolog programmers despise it, knowing the search algorithm and using the cut to prune unnecessary search branches can make the difference between a program that takes an hour to run and a program that takes a few minutes to run.
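The effect the cut exploits can be mimicked in any language with backtracking search. A rough Python analogy (this is not Prolog's actual semantics, just a sketch of "committing to a choice point"):

```python
# Rough analogy to Prolog's cut: a backtracking search over choice points,
# where committing to the first success discards the remaining alternatives
# instead of exploring them all.
def solutions(candidates, goal, commit=False):
    results = []
    for c in candidates:      # each candidate is a choice point
        if goal(c):
            results.append(c)
            if commit:        # like a cut: prune the rest of the search
                break
    return results

is_even = lambda n: n % 2 == 0
print(solutions(range(10), is_even))               # full search: [0, 2, 4, 6, 8]
print(solutions(range(10), is_even, commit=True))  # committed:   [0]
```

The saving here is trivial, but in a deep search tree, pruning whole subtrees in the same way is exactly where the hour-versus-minutes difference comes from.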

A better solution is to use constraint handling. This limits search trees up front rather than preventing backtracking past a cut. (caveat: this is from an admittedly imperfect knowledge of this field.)

Thanks a lot for links, shame

Thanks a lot for the links; shame I didn't find it myself :|

You say Prolog's I/O operations give non-logical side effects; I've read that before, but still, what exactly are those effects, and why does Mercury avoid them by using states-of-the-world?

I always thought that the ability to set...

...the mode of determinism was the biggest departure from Prolog. But as in all things, it depends on what you consider fundamental to a language that makes it "pure" vs. "impure". No doubt, side effects are the litmus test of purity for functional languages. But from a declarative-language standpoint, I'd think that purity has more to do with how unification is performed. But I suppose backtracking is rather important, so you would have to throw side effects into the mix.

I mean Input and output is th

I mean, input and output are the aim of I/O operations, and both Prolog and Mercury put things on screen, but Mercury in its predicates uses states-of-the-world, which makes it pure, in contrast to Prolog. I don't clearly understand why; do Prolog's side effects somehow not match the logic-programming paradigm, or something like that?

ok i understand now, thanks

OK, I understand now, thanks.

Although Prolog is advertise

Although Prolog is advertised as logic "programming", it is weak as a programming language because it insists on logic first, or logical purity. This is too bad for programming, because the ideas of rules and backward-chaining control are very useful programming strategies. A language called Lewis exists for the sake of exploring the programming possibilities of rules and backward chaining.

Lewis is imperative, functional, and logical. It achieves this by using rule modes. A FORMAL rule is similar to a Prolog rule. An EVENT rule allows action to be expressed directly, in a way that does not create logical impurity (whatever that may be), by creating and using "variables". We refer to these variables as modal facts because they are expressed like other facts, but may not be multi-valued. An ERASE mode seems to be all that is needed to give a functional mode of expression.

In addition to all this, Lewis is light on the logical interpretation. Matching is one to one. This means that rather complex matching (unification) that might not terminate is not allowed. Also, Lewis does not use resolution but Modus Ponens. We think this helps behavior and debugging. Basically, Lewis works like the simple backward rule systems one can find in most AI books, either as pseudo-code or as Lisp.

In all modesty, let me say that Lewis is based on a user's intuition and a lot of hacking. There is nothing "formal" about it. However, I have recently learned of a logical theory called hybrid logic that seems to provide a very formal validation. See the Hybrid logic page; two good papers (PDF) are listed there: 1, 2.

Finally, let me say that this is not a sales pitch for a new language. Lewis exists for the sake of discussion, although it does work as far as I can tell.


Please do not paste text from Microsoft Office!

Prolog has the purity of a st

Prolog has the purity of a street whore. It is extremely non-logically pure, for example when one employs a cut that alters the result of a query. This is a common technique for performing iteration, for example. Substantial programmes continue to be written in Prolog, and typically in enhanced variants thereof.

Unification and Backtracking

Prolog has the purity of a street whore.

Haha, nice tag line.

If you subtract out all the ideology and rambling from Hank's post, I think all he's saying is that it can be very useful to embed Prolog-style unification and backtracking in a more traditional imperative or functional programming language. I don't think this is a particularly controversial idea. For instance, I believe Oz has facilities for such things, and Jonathan Blow's experimental Lerp language (designed for game development) is built around a similar idea.

Sure, but I can think of enti

Sure, but I can think of entire applications written in Prolog, which gives the lie to the idea that it is not very suitable for programming. It is, however, unpopular for programming: many people, including me, dislike programming in it, but those who do like it can programme up a storm.

Prolog is not logical.

The "logic" of Prolog boils down to a fancy string rewriting system with priorities.

It's only "logical" insofar as you can model a Prolog computational model with logic. (Duh, what a discovery.)

For me Prolog has always been

For me Prolog has always been a puzzle. It uses a complex proof method called resolution apparently for the sake of completeness, but is limited to Horn clauses? How do they justify the complexity of resolution over simple backward chaining?

I don't like the word "proof".

Prolog is nothing but a set of term substitution rules with priorities; basically, a Markov algorithm with nested terms instead of strings. (Which is obvious, as simple strings are too inefficient in the real world; substitution would require a linear search at each step.)

The fact that you can do some sort of simple logic in Prolog is accidental, IMO.

Abstract logic programming languages

I just want to toss out this term (ask Google). It seems relevant to this and another branch of this thread.

Sure, but I can think of enti

Sure, but I can think of entire applications written in Prolog, which gives the lie to the idea that it is not very suitable for programming.

The fact that entire applications can be written in Prolog does not really provide much evidence for its suitability for large-scale software development. There are still people who write large-scale applications in assembly language (an example that comes to mind is Chris Sawyer, who wrote Rollercoaster Tycoon entirely in assembly) but I hope you agree that this is more a testament to the poor mental health of the authors rather than proof of the suitability of assembly language for software development (relative to the available alternatives).

Some Of These Need Better Explanation

NoobProgrammer wrote:

I heard that Mercury is purely declarative, in contrast to Prolog? Don't you think that Prolog is the purer one?

There is a subset of Prolog, commonly referred to as "pure Prolog", that is strictly declarative. You can use only that, but sometimes the resulting code is inefficient. That's when you dip into the non-declarative aspects of Prolog.

Most textbooks and significant reference documents will speak of pure Prolog. Purely declarative Prolog is usually presented first to gain a clear understanding of how a declarative language differs from a procedural language. Once the basic premises of pure Prolog are seen, the instructor can then gently bring up the non-declarative aspects of the language.

tkatchev wrote:

Prolog is a database system built on a string-rewriting computation model. (Think of Markov algorithms here.)... Prolog, on the other hand, is more like a fancy Markov algorithm engine...

Prolog uses SLD-resolution (not Markov algorithms):

  • S = Selection function (Prolog "selects" the next goal from a rule body),
  • L = Linear (Prolog selects the next goal from a rule body "linearly", i.e., from left-to-right),
  • D = Definite clauses are used.

Tom Cornell's Prolog course notes explain it far, far better than I possibly could:

"The resolution step then proceeds as follows. First, we take the current goal clause and we select one of its constituent atoms. In principle, we can select any atom we want, but Prolog will always select the leftmost member of the goal clause. This is called resolution with a selection function. Prolog's selection function applies to a sequence of expressions and returns the leftmost expression in that sequence. This sort of resolution system is called linear resolution with selection function, or SL-resolution. Therefore you will often see Prolog's particular type of resolution called SLD-resolution, with the `D' standing for `Definite Clauses'."
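The discipline described above (leftmost selection, linear resolution over definite clauses) can be sketched in a few lines of Python. This toy version is propositional only, so no unification is involved, and all names are invented for illustration:

```python
# Minimal propositional SLD-resolution sketch: a program is a list of definite
# clauses (head, body); a goal list is resolved depth-first, always selecting
# the leftmost goal, as Prolog does.
def sld(program, goals):
    if not goals:
        return True                       # empty goal clause: success
    selected, rest = goals[0], goals[1:]  # the "S" and "L": leftmost selection
    for head, body in program:            # try clauses in program order
        if head == selected:              # propositional match (no unification)
            if sld(program, list(body) + rest):
                return True               # success; otherwise backtrack
    return False

program = [
    ("grandparent", ["parent", "parent_of_parent"]),
    ("parent", []),             # a fact: definite clause with empty body
    ("parent_of_parent", []),
]
print(sld(program, ["grandparent"]))  # -> True
print(sld(program, ["uncle"]))        # -> False
```

Real Prolog differs in that the match on `head == selected` is replaced by unification over first-order terms, which is where variables and bindings enter the picture.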

And I don't understand your statements:

The "logic" of Prolog boils down to a fancy string rewriting system with priorities. It's only "logical" insofar as you can model a Prolog computational model with logic.

Chris Rathman writes:

Ain't Prolog complete in the Turing sense?

Yes, Lars-Henrik Eriksson has posted a proof.

Hank Thediek asks:

For me Prolog has always been a puzzle. It uses a complex proof method called resolution apparently for the sake of completeness, but is limited to Horn clauses? How do they justify the complexity of resolution over simple backward chaining?

Resolution merely states that given (P or Q) and ~P then Q follows.

A more complex example of resolution:
From ( A or B or C or D ) and (~A or E or F or G) we can infer that (B or C or D or E or F or G). Such is the "complexity of resolution". Perhaps you meant "the complexity of unification"?
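That single inference step is short enough to write down directly. A sketch in Python, treating clauses as sets of literals (the representation is my own choice for illustration):

```python
# One propositional resolution step: from (A or B or C or D) and
# (not A or E or F or G), derive (B or C or D or E or F or G).
def resolve(clause1, clause2, atom):
    """Clauses are sets of literals; a negated literal is the pair ('not', p).
    Resolves on `atom`, which must occur positively in clause1 and negated
    in clause2."""
    assert atom in clause1 and ("not", atom) in clause2
    return (clause1 - {atom}) | (clause2 - {("not", atom)})

c1 = {"A", "B", "C", "D"}
c2 = {("not", "A"), "E", "F", "G"}
print(sorted(resolve(c1, c2, "A")))  # -> ['B', 'C', 'D', 'E', 'F', 'G']
```

Modus ponens is the special case where `clause1` is the single literal `{P}` and `clause2` is `{('not', P), Q}`, yielding `{Q}`.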

And what's wrong with Horn clauses? What would you use instead?

My take on Prolog:

Some people use Prolog fluidly but I am not one of those. I haven't reached a point where I'm "thinking in Prolog". I have experienced that with natural languages but not with Prolog. I hope it happens and, if it happens, then I hope it is real (i.e., I have not gone insane instead). But meanwhile Prolog is lots of work.

To me Prolog has an almost spatial aspect in that, once a program is laid out correctly, you can "see" relationships as if in a picture or diagram; what was formerly procedural and dynamic becomes static. In that regard I feel Prolog appeals in some sense to my "right brain" (spatial, artistic). IIRC Richard A. O'Keefe speaks of this spatial aspect at the beginning of his text The Craft of Prolog. Yet the best Prolog programmers (and Richard O'Keefe is an example) seem to be as strongly "left-brained" (linear, language-oriented) as the strongest of procedural programmers.

Instead of using resolution a

Instead of using resolution as an inference rule they might have used Modus Ponens. Modus Ponens is simpler to implement, but is not complete. If the goal is programming, the lack of completeness doesn't seem like a problem? Does anyone know why Prolog uses resolution instead of Modus Ponens? I have several big books on my desk right now about this, and I don't seem to find an answer. What is so good about resolution in a logic programming language vs. Modus Ponens?

Resolution uses modus ponens!

Resolution is basically the procedure you get when you combine unification and (a slight generalization of) modus-ponens-based reasoning.

The resolution inference rule contains modus ponens as follows. In classical logic, 'P => Q' is equivalent to 'not(P) or Q'. So if you've got 'P' and 'not(P) or Q', then you can replace them with 'Q'. This is just a special case of the inference rule in resolution, which says that if 'A or X' and 'not(A) or Y' hold, then 'X or Y' holds.

Unification lets you quickly equate things that have to be the same. We need this because first-order logic has variables, and it's necessary to figure out what they might be bound to. The resolution rule for propositional logic doesn't need unification.
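To make the unification half concrete, here is a small first-order unification sketch in Python. The term representation is my own (variables are capitalized strings, compound terms are tuples), and, like most Prolog implementations, it omits the occurs check:

```python
# Small first-order unification sketch. Terms are: variables (strings starting
# with an uppercase letter), atoms (other strings), or compound terms
# (functor, arg, ...) as tuples. No occurs check, as in most Prologs.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return an extended substitution unifying t1 and t2, or None on failure."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):          # unify argument by argument
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None  # functor or arity clash: not unifiable

# unify f(X, b) with f(a, Y): binds X := a and Y := b
print(unify(("f", "X", "b"), ("f", "a", "Y")))  # -> {'X': 'a', 'Y': 'b'}
```

Propositional resolution, as noted above, needs none of this machinery; it is exactly the variable-binding step that distinguishes first-order resolution.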

Thanks. In a logical or mathe

Thanks. In a logical or mathematical sense they are the same, but modus ponens is a special case of resolution. But the computer codes are quite a bit different. I guess there is no way to answer the question exactly.

I think we have a culture clash here.

As you move from the U.S. towards more and more continental Europe, academic views on CS change greatly.

Where I come from, computation is just a set of not-so-fancy string mangling rules; moreover, even logic and some math tends to be built on top of simple string rewriting axioms.

Same with Prolog -- it is a very simple, very powerful and very flexible string rewriting system. As such, it can be used to model simple logic (since logic can also be viewed as a sort of rewriting system), but I don't think the result is pretty or particularly impressive. The real power of Prolog shines when you use it as a toolkit for building more complex rewriting systems.

I'm not arguing. :)

Anyways, this is all just a silly argument about picking the right set of base axioms.