LtU Forum

Objective scientific proof of OOP's validity? Don't need no stinkin' proof.

Having just discovered LtU, I've been reading many of the very interesting topics on here. As a reasonably accomplished programmer myself, I find the discussions on the validity of OOP and OOAD particularly thought-provoking, especially those that refer to published articles that speak out against OO.

Having been a programmer in some sort of professional capacity for the better part of a decade, I have made the transition to OO myself, initially with a lot of inertia from my procedural ways.

In my own work and play with programming, I have absolutely no doubt that it is a wonderful way of designing and writing code, and I have myself reaped the benefits of OO that are its raison d'être. Sure, much of my work is still done procedurally, but that is mostly because of the event-driven nature of modern GUI software. When I create a complex system, though, I can now only ever think in terms of modelling system structure and behaviour after the real-life systems it is designed to automate or facilitate. OO techniques are perfect for my type of work, and I could never go back to purely procedural ways.

What bothers me is this: for me as a programmer, OO is a wonderful, useful thing that saves me lots of time and mental effort. Why, then, are some so vehement in their critique of it? Does it mean that, as a former procedural programmer, my ways were so bad that OO helped to make me better? And if OO is still bad, does my choice of paradigm brand me as a hopeless monkey?

If OO is so bad, then, is there some other panacea that I am not seeing? Personally, I need no scientific, objective proof that OO is worthwhile... I can see and feel the improvement between the old and the new. If other programmers (as I assume the authors of the aforementioned articles are) are not seeing that improvement, what are they measuring OO against?

Or perhaps I am simply not grasping the critique properly?

A Lisp to JavaScript Compiler in 100 Lines

JavaScript occasionally gets some air time on LtU, and after the most recent discussion of using CSS selectors for hooking up event handlers, I decided it was time to pick up JS. It's immediately quite clear that JS and Lisp have strong ties at the semantic level, so as a first project I thought it would be fun to write a small Lisp-to-JS compiler, in JS.

It ended up being really small--about 100 lines of code for the core of the compiler. Not quite competitive with a SICP-style metacircular interpreter in lines of code but not too bad either. I thought you guys might get a kick out of it:

http://www.cryptopunk.com/?p=8
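
For a feel of how small the core can be, here is a minimal sketch of the same idea (my own toy in TypeScript, not the code from the linked post; every name in it is invented). It reads s-expressions and emits JavaScript source, handling only lambda, if, a few arithmetic operators, and application:

// Minimal s-expression reader plus Lisp-to-JS translator (an illustrative sketch only).
type SExpr = string | number | SExpr[];

function tokenize(src: string): string[] {
  return src.replace(/\(/g, " ( ").replace(/\)/g, " ) ").trim().split(/\s+/);
}

function read(tokens: string[]): SExpr {
  const tok = tokens.shift();
  if (tok === undefined) throw new Error("unexpected end of input");
  if (tok === "(") {
    const list: SExpr[] = [];
    while (tokens[0] !== ")") list.push(read(tokens));
    tokens.shift(); // drop the closing ")"
    return list;
  }
  const n = Number(tok);
  return isNaN(n) ? tok : n; // symbol or number literal
}

// Translate an s-expression into a JavaScript expression string.
function compile(e: SExpr): string {
  if (typeof e === "number") return String(e);
  if (typeof e === "string") return e; // a symbol becomes a JS identifier
  const [head, ...rest] = e;
  if (head === "lambda") {
    const [params, body] = rest as [string[], SExpr];
    return `(function (${params.join(", ")}) { return ${compile(body)}; })`;
  }
  if (head === "if") {
    const [test, then, alt] = rest;
    return `(${compile(test)} ? ${compile(then)} : ${compile(alt)})`;
  }
  if (head === "+" || head === "-" || head === "*" || head === "/") {
    return "(" + rest.map(compile).join(` ${head} `) + ")";
  }
  return `${compile(head)}(${rest.map(compile).join(", ")})`; // ordinary application
}

// Example: the emitted JS evaluates to 42.
const js = compile(read(tokenize("((lambda (n) (if n (* n 2) 0)) 21)")));
console.log(js, "=", eval(js));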

Bigloo.NET: compiling Scheme to .NET CLR


We discuss how to map Scheme constructs to CIL. We present performance analyses on a large set of real-life and standard Scheme benchmarks.

Mercury Vs Prolog

Some time ago, Ehud Lamm posted a link to a short comparison between Prolog and Mercury. I am very interested in this paper; is there any chance of seeing it, please?

In the meantime, feel free to give your own opinion on the subject.

I have heard that Mercury is purely declarative, in contrast to Prolog. Don't you think that Prolog is the purer one?

A Java/Python hybrid?

Like a lot of recent graduates, I'm trained in Java and have had it drummed into my head why the software industry likes a rigid, procedural, statically typed, OO language. (I know not everyone agrees, so please don't argue this.) I myself like Python, which is not statically typed or rigid but is small, elegant and readable. I have been wondering for a while whether an OO, procedural language exists with the rigidity of Java but a more Python-like syntax. Something like:

class Stack:
  public procedure add(item: T):
    array.append(item)

  private array : Array(T)

  # etc....

Can anybody suggest something?

The Simplicity of Concurrency

My first post. I've been reading LtU regularly for a while now and it is truly great. Many thanks to all who are involved.

Karl Fant of Theseus Research recently gave an interesting presentation, titled "The Simplicity of Concurrency", on what he believes to be a conceptual model that reverses the traditional view of sequentiality as simple and concurrency as complex. He focuses somewhat on a hardware implementation, but explains how it can be applied to compilers and programming at any level. The presentation is available as streaming audio or audio+video from

http://www.parc.com/cms/get_article.php?id=465

My summary would be:
* Introduce the idea of "data not available" (NULL) as an explicit concept (different from a NULL pointer or Lisp's nil).
* Introduce the conventions that
a) functions receiving all NULL inputs produce NULL output.
b) functions receiving all non-NULL inputs produce non-NULL output.
c) functions receiving mixed NULL and non-NULL inputs do not proceed.
* Given that functions do not proceed until they have received all of their inputs, they become self-synchronizing for one use (the small sketch after this list tries to illustrate that gating behaviour).
* Blocks of functions can be reset for further use by feeding their output, switched through a NULL-notter, back into the input to form a latch. This concept is easier to explain with a circuit diagram.
* Given that whole systems can be built upon this model to be fractally self-synchronizing, such systems can be partitioned arbitrarily into sequential components such as threads.
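
As a very rough software analogy (my own construction in TypeScript, not Fant's NULL Convention Logic itself; all names are invented), a two-input "function" can be modelled as a node that only computes once both inputs have moved from NULL to data, and only resets once both have returned to NULL:

// Toy model of the convention: an all-data wavefront computes, an all-NULL wavefront resets,
// and a mixed wavefront leaves the output untouched (the node "does not proceed").
const NULL = Symbol("not-available");
type Wire<T> = T | typeof NULL;

class Node2<A, B, R> {
  private out: Wire<R> = NULL;
  constructor(private fn: (a: A, b: B) => R) {}

  present(a: Wire<A>, b: Wire<B>): Wire<R> {
    if (a !== NULL && b !== NULL) this.out = this.fn(a as A, b as B); // complete data wavefront
    else if (a === NULL && b === NULL) this.out = NULL;               // complete NULL wavefront
    return this.out;                                                  // mixed: hold previous state
  }
}

const adder = new Node2<number, number, number>((x, y) => x + y);
console.log(adder.present(NULL, 3));    // mixed: still NULL, the node does not fire
console.log(adder.present(2, 3));       // all data: 5
console.log(adder.present(NULL, 3));    // mixed: holds 5 until a full NULL wavefront arrives
console.log(adder.present(NULL, NULL)); // all NULL: resets, ready for the next wavefront

In this analogy the explicit NULL wavefront plays the role of the reset that the latch/NULL-notter loop provides in the hardware description.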

Anyone care to correct me or Karl? I am by no means a language guru, so I may have a few concepts crossed.
Do you think this could be a foundation for building an automatic-threading compiler?

New Fortress Specification

I was just doing a search for the Fortress language specification that Sun is working on, and found that they released a new version (0.707) a couple of days ago.

I haven't read over it in much detail, but I've noticed that they made some changes to their interesting "do what I mean" unit manipulation fixups. For example, one of their samples that I questioned before now reads:

w: radian/s := 2 pi radian / 226 million year in s

Of course, parsing this the way they intend breaks a lot of normal mathematical precedence rules, and violates the recommendations of the SI. I'd sure like for them to list the rules that they use to "do what I mean" and perform "context-sensitive disambiguation." In my opinion, such things are equivalent to reading minds.

And can "in" also mean "inches"? What's "3 in in in"? Or "1 foot^2 in in in"? Or, equivalently, "144 in in in in in"? Hmmm...

It's still my bet that the final language will have to follow normal precedence rules more closely, because these "fixups" tend to paint you into an indefensible, surprising, and ugly corner, and they punish those who understand and write normal mathematical notation perfectly in favor of the dumber people who might make mistakes. I've taken the tactic of "err on the side of being pedantic, and try to teach correct, unambiguous usage" in my Frink language.
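
For what it's worth, if the intended grouping is 2 pi radian divided by (226 million years), with the result expressed per second (that is my reading of the sample, not something spelled out in the snippet above), the arithmetic works out to roughly: 226e6 yr x ~3.16e7 s/yr = ~7.1e15 s, so w = 2 pi / 7.1e15 = ~8.8e-16 radian/s. A strict left-to-right reading without the "do what I mean" grouping would attach "million year" and "in s" quite differently and produce a different quantity altogether.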

As an example, a friend just pointed me to the following scary difference in the Google calculator, which returns results that don't even have the same dimensions:

1/2 seconds
-1/2 seconds

Try these. Gah! Also see the Frink FAQ about this issue.
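
My guess at what is happening (I have not seen Google's grammar, so this is speculation) is that the two inputs end up grouped differently: one as (1/2) seconds, a time, and the other as 1/(2 seconds), a frequency. A tiny dimensional sketch in TypeScript (invented names) makes the difference concrete:

// A quantity carries a value and an exponent for the time dimension: value * seconds^timeExp.
type Qty = { value: number; timeExp: number };

const scalar = (value: number): Qty => ({ value, timeExp: 0 });
const seconds = (value: number): Qty => ({ value, timeExp: 1 });
const mul = (a: Qty, b: Qty): Qty => ({ value: a.value * b.value, timeExp: a.timeExp + b.timeExp });
const div = (a: Qty, b: Qty): Qty => ({ value: a.value / b.value, timeExp: a.timeExp - b.timeExp });

// Reading A: (1 / 2) * seconds  ->  0.5 s     (a time)
const readingA = mul(div(scalar(1), scalar(2)), seconds(1));
// Reading B: 1 / (2 * seconds)  ->  0.5 s^-1  (a frequency)
const readingB = div(scalar(1), mul(scalar(2), seconds(1)));

console.log(readingA); // { value: 0.5, timeExp: 1 }
console.log(readingB); // { value: 0.5, timeExp: -1 }

Whichever grouping each input triggers, a result whose dimension flips with the sign of the numerator is exactly the kind of surprise a fixed precedence rule avoids.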

Concrete Parse Tree to AST

I am parsing some language (let's say MS Excel formulas), and I get a concrete parse tree. Are there any specific algorithms for converting it to an AST? Are there any established methods or techniques? Or is it better (faster? simpler?) to build the AST directly? Any information is highly appreciated.
Thanks.
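
One common approach is a recursive fold over the concrete tree that throws away punctuation tokens, collapses the single-child chain rules the grammar introduces only for precedence, and rebuilds the remaining shapes as real AST nodes. Here is a sketch in TypeScript, with node shapes invented for the example rather than taken from any particular parser:

// Concrete parse tree: interior rule nodes plus leaf tokens (names are illustrative).
type CstNode =
  | { kind: "rule"; name: string; children: CstNode[] }
  | { kind: "token"; text: string; category: "number" | "ident" | "op" | "punct" };

// Abstract syntax tree: only the constructs that carry meaning survive.
type Ast =
  | { kind: "num"; value: number }
  | { kind: "ref"; name: string }
  | { kind: "binop"; op: string; left: Ast; right: Ast };

function toAst(node: CstNode): Ast {
  if (node.kind === "token") {
    if (node.category === "number") return { kind: "num", value: Number(node.text) };
    if (node.category === "ident") return { kind: "ref", name: node.text };
    throw new Error(`token '${node.text}' has no AST counterpart`);
  }
  // Drop parentheses, commas and other punctuation before looking at the shape.
  const kids = node.children.filter(c => !(c.kind === "token" && c.category === "punct"));
  if (kids.length === 1) return toAst(kids[0]); // collapse chain rules like Expr -> Term -> Factor
  const [left, op, right] = kids;
  if (kids.length === 3 && op.kind === "token" && op.category === "op") {
    return { kind: "binop", op: op.text, left: toAst(left), right: toAst(right) };
  }
  throw new Error(`don't know how to abstract rule '${node.name}'`);
}

// "(A1 + 2)" as a parse tree, with the parentheses still present as punctuation tokens.
const cst: CstNode = {
  kind: "rule", name: "Expr", children: [
    { kind: "token", text: "(", category: "punct" },
    { kind: "rule", name: "Sum", children: [
      { kind: "token", text: "A1", category: "ident" },
      { kind: "token", text: "+", category: "op" },
      { kind: "token", text: "2", category: "number" },
    ]},
    { kind: "token", text: ")", category: "punct" },
  ],
};
console.log(JSON.stringify(toAst(cst)));
// {"kind":"binop","op":"+","left":{"kind":"ref","name":"A1"},"right":{"kind":"num","value":2}}

That said, if you control the grammar, many parser generators let you build the AST directly in the semantic actions, which is usually both simpler and faster than producing a full concrete tree first.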
