Lambda the Ultimate

Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
started 5/22/2002; 10:21:42 AM - last post 6/9/2002; 2:55:20 AM
Ehud Lamm - Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/22/2002; 10:21:42 AM (reads: 3158, responses: 9)
Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
This is this Schemer's view of Logic Programming Systems

Related code can be found on Friedman's home page.

Combining (maybe we should say embedding) logic programming with Scheme.

Seems like a companion to the discussion of OOP in Scheme, mentioned here previously.


Posted to Logic/Declarative by Ehud Lamm on 5/22/02; 10:23:10 AM

funzel - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/22/2002; 10:37:45 AM (reads: 2385, responses: 1)
Sorry to be more on the "20 lines OOP" track. After hearing these fantastic stories about Scheme and Lisp, I thought about looking into those languages again. I'm not very good at reading Lisp code anymore. The author claims to have implemented full OO in his 20 lines (presumably to tell others that OO is not a big thing).

But:

- His method dispatcher is a switch statement?
- How does he override inherited methods?

Could someone explain this to me?

From my limited knowledge of C, it would be possible to write "C objects" in roughly the same 20 lines of code (perhaps without reflection) with structs and a switch?

I have another question. I am considering using Lisp for cross-platform (but Windows-centric) development. Which environment can I use for nice GUIs? Is there something like the MVC "pattern" in Lisp, or other strategies for GUI component reuse? I would be thankful for hints and links to such an environment.

thanks, bye -stephan

Ehud Lamm - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/22/2002; 10:41:34 AM (reads: 2463, responses: 0)
Please reply to the correct thread, people! (Trying to find things a few months from now shouldn't be as hard as trying to maintain your code.)

funzel - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/23/2002; 12:52:43 AM (reads: 2325, responses: 0)
Well, the OOP was mentioned in this post ;-)

I don't know the culture of LtU, but usually replying to old threads doesn't get you any replies.

Sorry, bye -stephan

jon fernquest - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/26/2002; 12:12:04 AM (reads: 2265, responses: 0)
> The author claims to have implemented full OO in his 20 lines
> (presumably to tell others that OO is not a big thing).

*Conciseness* can be a virtue of source code in and of itself. I would doubt anyone who claimed they produced a viable production system in a mere 20 lines of code, but that's not what is being claimed. There seem to be two types of conciseness: syntactic conciseness and semantic conciseness.

Semantic Conciseness:

Found in Haskell with its lazy evaluation strategy (expand the expressions in function arguments only as needed), in Prolog with its depth-first search strategy, and in strict functional languages like ML and Scheme (expand the expressions in function arguments before passing them) through recursion and higher-order functions like "fold".
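
For instance (my own illustration, not code from the paper), here is the difference between handing the recursion to a higher-order function and spelling it out by hand, assuming SRFI-1's fold is available:

  ;; Semantic conciseness via a higher-order function: fold threads the
  ;; accumulator through the list for us.
  (define (sum-of-squares lst)
    (fold (lambda (x acc) (+ acc (* x x))) 0 lst))

  ;; The same computation written as explicit recursion.
  (define (sum-of-squares* lst)
    (if (null? lst)
        0
        (+ (* (car lst) (car lst)) (sum-of-squares* (cdr lst)))))

  ;; (sum-of-squares '(1 2 3))  => 14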

(Note: Tail-call optimization, the feature that allows strict functional languages to use concise recursion and still control memory usage, has been included in a very important *production system*: the .NET CIL specification.)

The evaluation strategies are the root of conciseness in these languages. At least part of the reason these languages have not yet been adopted in production systems is that their evaluation strategies result in *problems in predicting and controlling memory usage* as well as in *determining exactly what a given piece of code does*. (Look closely at Haskell debuggers.)

(Note: much of EOPL is spent using continuations to rewrite concise recursive code into the verbose, unstructured style of typical von Neumann machine instruction sets: imperative code with global variables and conditional jumps. Memory usage in this machine-level code can be *predicted and controlled*, and what a given piece of such code does can at least be "hand simulated". No doubt there are also papers that use continuations to reduce Prolog and normal-order, lazy Haskell-like languages to nice linear machine code as well...)
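
A rough sketch of the kind of rewriting I mean (mine, not EOPL's actual code): the continuation-passing version of factorial turns the pending multiplications into an explicit argument, so every call is a tail call and the "stack" is visible as data:

  ;; Direct style: the pending multiplications pile up on the control stack.
  (define (fact n)
    (if (zero? n) 1 (* n (fact (- n 1)))))

  ;; Continuation-passing style: the pending work is reified as the
  ;; procedure k, so memory usage can be seen and controlled.
  (define (fact/k n k)
    (if (zero? n)
        (k 1)
        (fact/k (- n 1) (lambda (v) (k (* n v))))))

  ;; (fact/k 5 (lambda (v) v))  => 120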

Summary:

  • Prolog (depth first search with unification)
  • Haskell (lazy, delayed, normal order reduction)
  • ML, Scheme (applicative-order reduction with recursion and higher-order functions)

Pros:

  • Once and for all provably correct (given that you can control the input to the program).
  • Principled and formulaic: maintenance requires recalculating the formula, not patching it with more ad-hoc imperative OO code that ultimately spirals out of control to the point of unusability (bloat), at which time the code must be thrown away and rewritten. (This pattern can be found again and again in the histories of large software development projects such as Microsoft's; see Microsoft Secrets, Cusumano and Selby, 1995.)

Cons:

  • Requires slightly mathematically gifted programmers. According to Gabriel, this is enough to damn it to commercial/production oblivion à la Lisp.
  • Formulaic code is hard to figure out, and thus hard to read.

========================================

Concise Syntax:

Concise syntax usually takes the form of exceptions to an otherwise general rule (power through generality); a rule like Java's "everything is a class" maintains consistency but leads to verbosity. The verbosity of Java syntax has also fed the movement towards "refactoring" to increase readability. (Smalltalk's consistent application of the message-passing paradigm also seems to produce a little verbosity.)

Syntactic conciseness can increase readability. It's a lot easier to understand concise Python list syntax than to wade through Java's "everything is a class" syntax. See the online introduction to O'Reilly's new Jython book:

http://www.onjava.com/pub/a/onjava/2002/03/27/jython.html

and

http://www.oreilly.com/catalog/jythoness/chapter/ch01.html

C#'s "foreach" syntax for collections is a lot easier to read and write than the verbose equivalent Java idiom. See C# and Java comparison for this and other examples:

http://genamics.com/developer/csharp_comparative.htm

(Note: Metsker's "Building Parsers With Java" provides some good examples of how "concise syntax" and "concise semantics" are separable, by showing what Prolog (along with other mini-languages) looks like without concise Prolog syntax (p. 262). The Java code is littered with hard-to-untangle nested constructor ("new") calls, since "everything in Java is a class". This code could be said to contain "concise semantics" expressed in rather "verbose syntax":

http://www.aw.com/catalog/academic/product/1,4096,0201719622,00.html )

Cheers,

Jon Fernquest bayinnaung@hotmail.com

funzel - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/26/2002; 7:58:14 AM (reads: 2175, responses: 0)
Well, this was a nice post after all. I totally agree with you on conciseness. That's one of the reasons why I sometimes prefer Ruby (which seems even more concise than Python at times) over Java.

But what does this have to do with my post? Are you saying that because Lisp is such a concise language, he implemented full OO in 20 lines?

jon fernquest - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/29/2002; 12:25:18 AM (reads: 2194, responses: 1)
> But what does this have to do with my post? Are you saying
> that because Lisp is such a concise language, he implemented
> full OO in 20 lines?

First, the paper is not about "full OO", or better, "full-featured object systems". It is about "logic systems": Prolog-like systems with depth-first search/backtracking and unification. The paper is not about "full-featured logic systems" either; it is about building a "minimalistic logic system with bare essential features", a "roll your own" or "poor man's" system. Such a reduced system can be useful. Microsoft used a small, stripped-down Prolog system for resource definition in the NT operating system (see comp.lang.prolog for more info). Also look at Norvig's "Paradigms of AI Programming" in Lisp: many of the useful mini-systems he builds there sit on top of the small Prolog interpreter he builds first.
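
To make "unification" concrete, here is a bare-bones unifier over list terms. This is only my sketch of the essential idea (it even omits the occurs check), not Friedman's or Norvig's code; variables are symbols starting with "?", and substitutions are association lists:

  (define (variable? x)
    (and (symbol? x)
         (char=? (string-ref (symbol->string x) 0) #\?)))

  (define (unify x y subst)
    (cond ((not subst) #f)                          ; an earlier failure propagates
          ((equal? x y) subst)
          ((variable? x) (unify-variable x y subst))
          ((variable? y) (unify-variable y x subst))
          ((and (pair? x) (pair? y))
           (unify (cdr x) (cdr y) (unify (car x) (car y) subst)))
          (else #f)))

  (define (unify-variable v term subst)
    (let ((binding (assq v subst)))
      (if binding
          (unify (cdr binding) term subst)          ; already bound: unify the binding
          (cons (cons v term) subst))))             ; otherwise extend the substitution

  ;; (unify '(likes ?x scheme) '(likes dan ?y) '())
  ;;   => ((?y . scheme) (?x . dan))

Depth-first search with backtracking is then just a matter of trying a relation's clauses in order and abandoning a substitution when unification fails.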

What would a "full-featured logic system" look like? Look at the API of SICStus Prolog or SWI-Prolog. Also notice that they are built on top of the Warren Abstract Machine (WAM) for performance. But you don't need all this for a logic system to be useful if it is *targeted at a specific domain*.

Second, the paper is not about Lisp, it's about Scheme. Lisp and Lispers sometimes seem like a world unto themselves; Scheme, on the other hand, is one of the foremost languages used in programming language research as well as in pedagogy.

Third, from my personal experience, I don't think I really understood how an object system was built (or how a MOP, a Meta-Object Protocol like Lisp's CLOS, was built) until I read the chapter in EOPL devoted to building a small, concise, conceptually simple one. In general, when I'm designing any software system I like to start with a concise, conceptually simple prototype (even if it's only in my head): as few lines as it takes to capture the *essential features*. For a production or commercial system, any viable company would have to add all sorts of cool features to distinguish and sell their product... so they could call it something other than what it really is... or they might have to carry along a lot of legacy baggage, like C++ does because it inherits from C, which can make C++ code non-object-oriented... or they might be gluing together things they already have to get the job done quickly... or someone might be adding features to an existing system they don't fully understand... When you add time and money constraints you get reality, and reality is messy in an infinite number of ways. That's why small systems stripped down to essential features are enlightening.
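
Here is roughly the flavor of such a stripped-down object system, as a toy sketch of my own (not the code from EOPL or from the "20 lines" post): objects are closures that dispatch on a message symbol, the "switch statement" asked about above is just the case expression, and overriding means handling a message locally instead of delegating it to the parent:

  (define (make-point x y)
    (lambda (msg . args)
      (case msg
        ((x) x)
        ((y) y)
        ((describe) (list 'point x y))
        (else (error "unknown message" msg)))))

  (define (make-colored-point x y color)
    (let ((parent (make-point x y)))
      (lambda (msg . args)
        (case msg
          ((color) color)
          ((describe) (list 'colored-point x y color))  ; overrides the parent
          (else (apply parent msg args))))))            ; everything else is inherited

  ;; (define p (make-colored-point 1 2 'red))
  ;; (p 'x)         => 1                          (inherited)
  ;; (p 'describe)  => (colored-point 1 2 red)    (overridden)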

For me the most interesting insight came at the beginning of the paper, when he made the observation that most rules fit naturally into small related sets, and it is usually only necessary to search in one of those sets: an ideal place to apply Scheme's concept of local scope. A lot of time is spent in full Prolog indexing all the rules so you can search through *all* of them quickly.
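
My guess at what that might look like (only my reading, not the paper's actual code): each relation is a closure over its own little set of facts, so a query never searches outside it. Using the unify sketch above and SRFI-1's filter:

  (define (make-relation facts)
    (lambda (goal)
      ;; depth-first over this relation's facts only
      (filter (lambda (fact) (unify goal fact '())) facts)))

  (define likes
    (make-relation '((likes dan scheme)
                     (likes dan prolog))))

  ;; (likes '(likes dan ?what))  => both facts, since each unifies with the goal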

Also related to my discussion of "semantic conciseness" vs. "syntactic conciseness": XSLT's tree-traversal matching strategy seems to give it semantic conciseness, but the constraint of using XML tags seems to make it syntactically verbose, non-standard, and a little hard to read at first.

Ehud Lamm - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
5/29/2002; 9:50:57 AM (reads: 2249, responses: 0)
Basically, the demonstration is a proof of concept.

jon fernquest - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
6/9/2002; 2:13:23 AM (reads: 2117, responses: 1)
> Basically, the demonstration is a proof of concept.

It is also a minimal program that fulfills the specs, kind of like the "hello world" programs in an intro to computer languages or a "notepad" program in an intro to GUIs. BTW, here's an interesting quote about "hello world" programs and conciseness:

"The original hello-world program in the Windows 1.0 SDK was a bit of a scandal. HELLO.C was about 150 lines long, and the the HELLO.RC resource script had another 20 or so more lines. Granted, the program created a menu and displayed a simple dialog box, but even so, leaving out those amenities still left about 70 lines of code. Veteran C programmers often curled up in horror of laughter when first encountering the Windows hello-world program."

"In a sense, the whole history of new programming languages and class libraries for Windows has involved the struggle to reduce the windows hello-world program down to something small, sleek, and elegant." (Petzold, Programming Microsoft Windows with C#, Microsoft Press, 2002, p. 47)

And here's another interesting, concise "proof of concept" paper. Building on Prolog, all the principal parsers found in natural language parsing are implemented in 20 lines or less:

Stuart Shieber, Yves Schabes, and Fernando Pereira. Principles and Implementation of Deductive Parsing. Journal of Logic Programming, volume 24, number 1-2, pages 3-36, July/August 1995. Also available as cmp-lg/9404008. (The code from the paper is also available.)

http://www.eecs.harvard.edu/~shieber/papers.html

Ehud Lamm - Re: Daniel Friedman: A Poorman's 'Roll Your Own' Logic System
6/9/2002; 2:55:20 AM (reads: 2178, responses: 0)
I like the Petzold quote (i.e., I agree with it). I added it to the quotations page.