Tim Bray on Ruby

How I got here was, two recent pieces of writing that made me think heavily were Ruby-centric: Mikael Brockman’s Continuations on the Web and Sam Ruby’s Rails Confidence Builder... So I went and bought Programming Ruby ('Pickaxe' in the same sense that Programming Perl is the 'Camel book')

The conclusion of this piece is that Ruby looks like more than a fad, so LtU readers who still haven't checked it out might want to do so...

Where are the other editors, I wonder?

Code Completion

Call me hidebound and conservative, but I think that “optimizing code” and “helping IDEs” (and it’s a whole lot more than just “tooltip help”) are awfully damn important. In particular, as James Strachan has often argued, the combination of a good modern IDE and a statically typed language means that you hardly ever have to type out a full method or variable name, and even though you might have to write more lines of Java than you would in Ruby, you might get the code written just as fast.

Except for variables, it is actually possible to analyze code at the class and module level (or for some syntactically pronounced objects like lists, int literals, etc.) and provide completion hints. But I don't know whether this speeds up coding much for experienced Python/Ruby programmers. Maybe that's because I've never heard a Python/Ruby programmer complain about coding speed. It's usually people coming from Java who miss fat IDEs for scripting languages. De gustibus non est disputandum.

coding fast

the problem isn't coding fast: it's later maintaining all that quick-and-dirty code. that's far easier with powerful, expressive languages, since there's so much less code to deal with. with java, you're likely to change a million lines and declarations. guess code-completion and current refactoring techniques won't help you much...

there was an interesting discussion a while ago on IDEs and completions exactly about coding fast or whatnot.

here it is:
http://lambda-the-ultimate.org/node/view/893

somewhere in the middle...

coding fast

I like java. The big advantage is that you have to look less at the code and can look at the declarations to understand it. That is better than comments, which can be wrong (every programmer seems to be in a hurry).

how is this different from any other language...

... given that they provide sensible APIs?

in fact, i still find a typical Python interface clearer than your typical convoluted java one, specially because of tons of redundant declarations and "enterprise needs" for stupid hungarian naming conventions...

The difference is that a decl

a declaration contains a lot of useful information and is more formal; it is required and checked, and is therefore reliable and superior to a comment. that is true for type declarations but also for exceptions and interface declarations. I do not understand what it has to do with APIs. I am not a hardcore java person, but I do not like all the java bashing.

reliable

suppose i have this as my sole interface:

def sum( a, b ): pass

yep, i sure feel a lot more confident about passing parameters around when i see

int sum( int a, int b );

after all, in the first case i could do

sum( Food(), Book() ) 

which would probably not do what's expected, so i agree with you there.

of course, you could do this

def sum( num1, num2 ): pass

which should be enough to give you some clue as to what to expect

but then, when you take that into the context of OO programming, and you see this interface

class Number( object ):
  def sum( self, b ): pass

and this

public interface Number {
  public int   sum( int n );
  public float sum( float n );
}

you sure feel it's overkill.

how much more reliable does it need to be before the programmer is confident that a method named "sum" on a "Number" will be "summing" the given value with its own value?

you don't even need comments when common-sense and good naming conventions work.

the problem is i have a borin

the problem is i have a boring programming job with a database and other programmers who do not make much common sense to me. Maybe you are just lucky and only have to deal with nice abstract things like sums.

i wish that was the case.

i wish that was the case.

you just described my coworkers as well... :)

i don't need

i don't need anyone telling me ruby is more than a fad. i've known it for a long time.

in fact, all this Rails buzz is becoming annoying and is attracting some java people to ruby, who are already requesting a more "familiar" syntax for the language, threatening to take away from the language convenient features from perl and other goodies.

ruby's got a nice cross-selection of good features from smalltalk, perl, scheme and even python. i hope it stays that way.

*edit:* btw, just noticed his "?/!/= method-naming tricks" compliment. guess he never heard of scheme... credit where credit is due...

*edit 2:* my god! "I’ve had access to languages with closures and continuations ... and I’ve never ever written one... I’m still unconvinced that these are a necessary part of the professional programmer’s arsenal." blasphemer! it's like the python guy who is happy with his for loops and doesn't need proper tail calls... has he even noticed that classes are mostly closure abstractions?
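since that claim may sound odd, here's a minimal sketch of what i mean (hypothetical make_counter, in ruby): a lambda capturing local state behaves much like a one-method object.

   def make_counter
     count = 0                 # local state, captured by the closure
     lambda { count += 1 }
   end

   c = make_counter
   c.call   # => 1
   c.call   # => 2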

Syntax shmintax

"all this Rails buzz is becoming annoying and is attracting some java people to ruby, who are already requesting a more "familiar" syntax to the language, "

I recall a joke to make Java people comfortable with Python, you can use braces in familiar ways:

def myFn(): #{
    anotherFn(); # superfluous semicolon
#}

Notice you can even add a superfluous semicolon at the end of statements! :-)

ah!

but they'll never be happy unless

def foo( arg ): #{
    # Integer arg
    bar( arg );
#}

Don't you worry about Ruby ge

Don't you worry about Ruby getting tainted by the Java folks (or even by the Rails folks, for that matter). I've been lurking on ruby-talk for years. Matz is incorruptible.

Where Abstractions Live

Yeah, I saw the comment about not writing closures or continuations, too, and I think it's interesting. It seems to me (and this is far from an original thought) that some of these features might serve best as tools for language implementors, and it's OK if those particular abstractions are themselves abstracted from in the language syntax and, to some extent, semantics.

For example, in A Monadic Framework for Subcontinuations, we learn how this framework can be used to encapsulate effects, basically allowing for an implementation of implicit parallelism, even though a given function may use state internally. But when we read the paper, we learn that this can all be implemented in terms of standard CPS! So it should be possible to implement this in basically any way, from a syntactic transformation in your favorite language to being layered on top of that phase of a compiler for a new language.
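(To give a flavor of what "standard CPS" means, here is a toy Ruby sketch, assuming nothing from the paper itself: each step receives the "rest of the computation" as an explicit block instead of returning, which is the essence of the transformation.)

   # each step receives its continuation k instead of returning a value
   def add_cps(a, b, &k)
     k.call(a + b)
   end

   # computes (1 + 2) + 3, then prints; the control flow is explicit
   add_cps(1, 2) { |s| add_cps(s, 3) { |t| puts t } }   # prints 6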

It's especially interesting, I think, to contemplate this in the context of Zipper as a delimited continuation. From the article:

Zipper is a construction that lets us replace an item deep in a complex data structure, e.g., a tree or a term, without any mutation. The resulting data structure will share as much of its components with the old structure as possible. The old data structure is still available (which can be handy if we wish to 'undo' the operation later on). Zipper is essentially an `updateable' and yet pure functional cursor into a data structure... Our zipper type depends only on the interface (but not the implementation!) of a traversal function. Our zipper is a derivative of a traversal function rather than that of a data structure itself.
I find this very exciting! So it seems we can model arbitrary changes to arbitrary data structures if we can just provide a CPS transform and a traversal interface for the data structure, basically by combining these two papers/articles. Again, it seems like it could be implemented at one of a number of levels of abstraction. A language could provide implicit parallelism using the first paper alone, or it could provide sophisticated data structure modification for any data structure implementing a standard traversal interface. Put the two together and I think you get some real power, and you could do so without exposing delimited continuations or monads to the user. Or, if a design goal is to allow users of your language to do anything the designer can do à la Lisp or Smalltalk, it might be sufficient to provide a hygienic macro facility, a CPS macro, and the Zipper as a module in the standard libraries.
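(For intuition only, here is the classic Huet-style list zipper in Ruby. Caveat: the article's zipper is derived from a traversal function via delimited continuations, which this sketch emphatically is not; it only shows the "pure functional cursor" idea.)

   Zipper = Struct.new(:left, :focus, :right)   # left is kept reversed

   def zip_list(xs)
     Zipper.new([], xs[0], xs.drop(1))
   end

   def move_right(z)
     Zipper.new([z.focus] + z.left, z.right[0], z.right.drop(1))
   end

   def replace(z, v)                  # old structure remains available
     Zipper.new(z.left, v, z.right)
   end

   def unzip(z)
     z.left.reverse + [z.focus] + z.right
   end

   z  = zip_list([1, 2, 3])
   z2 = replace(move_right(z), 99)
   p unzip(z2)   # => [1, 99, 3]
   p unzip(z)    # => [1, 2, 3]  (original untouched)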

So what I think this all points to is that a lot of technology historically seen in functional languages is migrating into the implementation technology of more mainstream languages, and the mainstream languages are evolving to present abstractions that lend themselves to implementation via these technologies, even if (superficially) they themselves are not particularly functional. To me, this reflects some gradual maturation, both of our understanding of the underlying functional concepts, and of how best to implement any given programming language, be it functional or not.

Migration conjecture

So what I think this all points to is that a lot of technology historically seen in functional languages is migrating into the implementation technology of more mainstream languages, and the mainstream languages are evolving to present abstractions that lend themselves to implementation via these technologies, even if (superficially) they themselves are not particularly functional.

Well, I do not believe in the "migration" conjecture. There may be some prior art, but this does not imply that Lisp has somehow influenced Java because it integrated garbage collection previously. This "migration" conjecture seems to be a mythic-heroic element in the FP community: the heretics are the silent power that push all the ideas into the PL realm. Finally the mainstreamers take over and forget their great innovative ancestors. The real history is hidden, the truth is out there, etc.

Ideas are cheap, at least in software technology. Implementing them in a specific context may be arbitrarily hard or even impossible.

About the zipper: I admit that I do not understand the approach sufficiently, but from the description it reminds me of OOP visitors. Since you have to encode the data structure somehow in your traversal function, there is nothing magical about it.

Kay

heretic

this does not imply that Lisp has somehow influenced Java because it integrated garbage collection previously.

Could it be, then, that Lisp indirectly influenced Java, given that James Gosling was a Lisp hacker (he even coded an Emacs version) and that the Java language specification was also worked on by Guy Steele Jr., the man behind Scheme?

Guy Steele

Don't forget CLTL2.

Modulation

schluehk: Well, I do not believe in the "migration" conjecture. There may be some prior art, but this does not imply that Lisp has somehow influenced Java because it integrated garbage collection previously. This "migration" conjecture seems to be a mythic-heroic element in the FP community: the heretics are the silent power that push all the ideas into the PL realm. Finally the mainstreamers take over and forget their great innovative ancestors. The real history is hidden, the truth is out there, etc.

I have to say that you're reading a bit more into what I said than what it actually says. For one thing, at a certain level of abstraction, it's a simple fact, e.g. Anton's post in the middle of this thread, in which he provides links to a couple of important papers on Static Single-Assignment as an intermediate representation in mainstream (C, C++...) compilers. At another level of abstraction, it's evolutionary: it's public knowledge that people are working on new languages that do, in fact, provide implicit parallelism with local state kept in local heaps, and that the underlying architecture is functional, while the surface syntax and semantics are much more traditionally imperative/object-oriented.

Finally, at another level of abstraction, it's speculative: I'm wondering aloud where the line is most appropriately drawn between having, e.g., the ability to impose guarantees of observational equivalence up to a given function call via macros, libraries, and modules at one extreme, and making it a language feature, presumably complete with interesting inference processes in the compiler, at the other extreme. I'm also wondering aloud what kinds of interesting results could come out of a combination of implicit parallelism and a zipper that, unlike all other zipper implementations I've seen, relies only on a traversal interface, rather than on the implementation of the data structure being traversed. Given that both papers rely on delimited continuations (even though the monadic framework paper doesn't have to!), it seems as though there might be some low-hanging fruit here. That low-hanging fruit may or may not make sense to explore in the context of implementing a programming language. My educated guess is that it would. That's all.

Incidentally, your choice of suggesting that Lisp didn't influence Java is, from a historical perspective, singularly inapt: as others have pointed out, James Gosling comes from the Lisp tradition via Carnegie-Mellon, wrote an early-ish implementation of EMACS, and very definitely and consciously took lessons from Lisp in Java's design. However, it really wasn't until John Rose designed inner classes in Java 1.1 that the Lisp/Scheme influence on Java would become overt. Finally, don't take my word for it; listen to Guy Steele: "And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?"

The Mocklisper

Maybe it's true that Gosling and Steele dragged C++ programmers "halfway to Lisp", while John Rose uncovered this by means of inner classes [1]. One might wonder, then, why Paul Graham dislikes Java? So there is a kind of halfway, or quarter-way, justice after explicit static typing, dropping macros and functional closures, after checked exceptions, interface inheritance, a broken type system, the statement-expression distinction and C++ syntax. Well, of course they dragged C++ programmers along - but maybe because they created a VM with garbage collection (inspired by C programmer Bill Joy) enabling safe and portable compiled code? Are tons of libraries an argument? After their C/C++ adventures, programmers were longing for a dominatrix to bind and restrict them. Last but not least, the Java label is way cool, and Sun was the virtual opponent of the other company in the mid-90s.

It's really hard to believe that Gosling is a crypto-Lisper, considering the infinitesimal influence of Lisp on Java. But maybe I'm wrong, and as a true Emacs hacker Gosling must also be a Lisp adherent. So what is Gosling's relationship to Emacs?

One early Emacs-like editor to run on Unix was Gosling Emacs, written by James Gosling in 1981. It was written in C and used a language with Lisp-like syntax known as Mocklisp as an extension language. In 1984 it was proprietary software.

http://en.wikipedia.org/wiki/Emacs

Gosling was a Unix and C hacker who created an Emacs variant, providing a scripting language called "Mocklisp". Never heard about "Mocklisp"? Who's mocking whom?

Richard Stallman characterized it as a programming language that "looks syntactically like Lisp, but didn't have the data structures of Lisp. So programs were not data, and vital elements of Lisp were missing. Its data structures were strings, numbers and a few other specialized things."

http://en.wikipedia.org/wiki/Mocklisp

Here we are. Ten years later, Gosling made another attempt and created MockC++ - and finally became a superstar :)

[1] Inner classes are fine in Java because Java neither uses explicit namespaces like C++ nor allows more than one public class per file.

How Much Influence is Enough?

It seems to me that because Java isn't a dialect of Lisp you're not prepared to concede Steele's point that they managed to drag a lot of C++ programmers "about half way" there. As an old Lisp die-hard who's slung a lot of Java for a living, all I can say is that the ability to anonymously instantiate interfaces like "Runnable" and pass those anonymous instances to methods that take anything that implements "Runnable" and invoke "run" on them sure feels like higher-order functions and lambda to me. Couple that with inner classes, and writing Swing code feels a lot like writing GUI code in any of the Lisps I've used that had GUI frameworks.

Granted, C++ itself is feeling mighty Lispy these days, what with Boost Lambda, FC++, and Boost Spirit Phoenix floating around...

The point, of course, is that I feel the claim of "influence" is not only supportable in Java's case, but trivial, and I'll make the same claim for functional programming in C++. However, you may wish to make the argument that you're interested in differences of kind rather than degree, and finding the point at which a difference of sufficient degree becomes a difference of kind, e.g. people don't go around calling Java or C++ "functional programming languages," in which case, we agree.

Influenza

Just for clarity: I concede an influence if it causes the existence/style of some particular property that would otherwise not be available, or would be of a much different kind. Causation/influence is fuzzy and open to interpretation, I know. For example, you can overload operator() in C++, enabling function objects that behave like functions as first-class citizens. But this is caused by enabling operator overloading without artificial restrictions (as in C#), not because C++ is influenced by Lisp. Boost Lambda, on the other hand, is clearly influenced by FP, but it resides at the application (user-defined) level, i.e. it is a derived property that did not influence C++ language design.

I'm not sure how to weigh the application/language level distinction, but it does exist. It might become less important with more powerful and extensible runtimes in the future.

Regarding the "?/!/=" naming

Regarding the "?/!/=" naming tricks; they're not nearly as nice as Scheme, because they start breaking when you use other fairly common idioms. For example,

attr_accessor :valid?

Now you can say "obj.valid?" to test to see if it's valid, but just try to assign to it!

(Answer, for those who don't want to work it out themselves: instead of saying "obj.valid? = true", say "obj.method('valid?=').call(true)". Ruby is full of inconsistencies and surprises like this.)
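A minimal sketch of the asymmetry (hypothetical Doc class; assuming a recent Ruby, where attr_accessor :valid? itself raises a NameError, the accessors are defined by hand here):

   class Doc
     def initialize; @valid = false; end
     def valid?; @valid; end                       # predicate reader: fine
     define_method(:'valid?=') { |v| @valid = v }  # writer can't use plain def
   end

   doc = Doc.new
   # doc.valid? = true               # syntax error: '?' can't precede '='
   doc.method('valid?=').call(true)  # the workaround from the comment
   puts doc.valid?                   # => true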

How is Ruby different?

I tend to get excited about a programming language when it forces me to re-wire the way I think. That is, I want a language to open me up to new (to me) ways of doing things.

I first learned Basic and Pascal. Then C rewired me. Then Forth has a tremendous impact on how I approached problem solving. Then Lisp. Then Smalltalk. Then Perl. Then Tcl. Then Erlang. Then ML. Then Haskell.

I picked up a little Ruby a few years ago, but it didn't stimulate my mind enough to use it for sripting instead of, say: Python or Tcl. And it made me yearn a bit for Smalltalk.

Ruby is a nice language as mainstream scripting languages go. I might even recommend it to Java and C folks. But what would cause Perl or Python (or Tcl) folks to abandon their homeland for Ruby's shores?

How is Ruby different?

my list

The differences that made me a convert were:

1) Blocks. The syntax makes them trivial and the stdlib uses them everywhere. You don't realize how much explicitly writing loops sucks until you stop having to do it. (And the syntax naturally extends itself to things like resource management, e.g. File.open).

2) OO model. Everything is receiver.method_call. It's sooo consistent.

3) Meta-programming. Tacking new methods on to existing classes. Using attr_accessor and friends. Writing your own versions of them. See Rake and Rails for how easy this for making domain-specific languages.

Not new things by any means, but they all seem to work so well together. The net effect is that I rarely feel like I'm fighting with the computer, or that I'm doing something dumb just in order to get the thing to work. (Rough sketches of all three below.)
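Hypothetical snippets illustrating the three points (nothing here is from the parent post):

   # 1) Blocks: iteration and resource management share one idiom.
   [1, 2, 3].each { |n| puts n * 2 }
   File.open("notes.txt", "w") { |f| f.puts "hi" }  # closed automatically

   # 2) OO model: everything is receiver.method_call, even on literals.
   p 3.times.to_a    # => [0, 1, 2]
   p "abc".reverse   # => "cba"

   # 3) Meta-programming: reopen an existing class and add a method.
   class String
     def shout
       upcase + "!"
     end
   end
   p "ruby".shout    # => "RUBY!"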

blocks in ruby

At the last OSCON, matz gave a presentation on blocks in ruby and the (small) differences w.r.t. lisp's closures and smalltalk blocks.

I find it shows, somewhat, the main (intended) difference w.r.t. some other scripting languages, i.e. the focus on simplicity, writability and productivity (considering python as focused on readability :)

OTOH it does not show the special support for iteration constructs in ruby blocks.

Is it just me, or are the Lis

Is it just me, or are the Lisp examples in that OSCON presentation rather naive?

E.g. he compares LISP and Ruby syntax as follows, under the heading "Syntax matters"
"""
(setq new-list (mapcar (lambda (e)
(do-process e))
list))

new_list = list.collect {|e| e.do_process}

No macro needed
"""

Now why would one explicitly wrap do-process in a lambda here?
I.e. why not (setq new-list (mapcar do-process list))?

It seems the author likes to "see the elements", here represented by the variable 'e'. He does this in several other slides.

Also, what does he mean "No macro needed"? I don't see any macros! (though maybe I'm confused with Scheme)

I agree

I also wonder if

new_list = list.collect do_process

works in Ruby (if do_process is a function).

It does not

This is one of the strangest warts in Ruby (which is already altogether too warty for my taste). Blocks and methods are different. There really aren't any functions per se, so do_process would have to be a method. Blocks can be converted to Proc objects via a rather arcane syntactic flag:

If the last parameter in a method definition is prefixed with an ampersand (such as &action), Ruby looks for a code block whenever that method is called. That code block is converted to an object of class Proc and assigned to the parameter. You can then treat the parameter as any other variable. In our example, we assigned it to the instance variable @action. When the callback method buttonPressed is invoked, we use the Proc#call method on that object to invoke the block. [source]
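A minimal sketch of that mechanism, with a hypothetical Button class standing in for the book's example:

   class Button
     def on_press(&action)   # block is reified into a Proc...
       @action = action
     end
     def press               # ...and invoked later via #call
       @action.call(self)
     end
   end

   b = Button.new
   b.on_press { |btn| puts "pressed!" }
   b.press   # prints "pressed!"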

There's also a method "lambda" defined in the Kernel module (available everywhere) that uses this feature to allow Procs to be defined:

   lambda {|e| puts(e)}

but I think it may actually have slightly different "magic" semantics than a similar method you might define yourself. There's a lot of that in Ruby.



If you've got a Proc object, you can pass it to a method expecting a block by using a syntactic marker. The following seem to be equivalent (note the ampersand):

   l.collect {|e| ...}

and

   p = lambda {|e| ...}
   l.collect(&p)

So you can (sort of) convert between blocks and "first-class functions" in the form of Proc objects. However, you've probably noticed that neither of these are really the same as actual methods. In fact, there is no way to refer to a method directly (i.e., without invoking it) other than via reflection (or eta-expanding it into a block, as above). Method calls can be written with no parentheses, and since a method may take no arguments... (Please note in passing that if you're not careful, you'll also be ambushed by the arcane scoping rules distinguishing local variable references from method references).



So in short, there are three separate concepts here, and they are in some ways convertible, but they're each distinct and cannot be intermingled: methods, blocks, Proc objects. Sorry for the long rant, but this is, in brief, why Ruby is Not My Favorite Language.



(Of course, since there is AFAIK no spec or reference of any kind, much of this is guesswork based on clues found around the web.)
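(To sum up the three concepts in one runnable sketch -- hypothetical names, and guesswork-grade Ruby like the rest of the post:)

   def twice             # a method; implicitly accepts a block
     yield 1
     yield 2
   end

   twice { |n| puts n }             # 1. a block, passed directly

   dbl = lambda { |n| puts n * 2 }  # 2. a Proc object
   twice(&dbl)                      # & converts Proc -> block

   m = method(:twice)               # 3. a Method object, via reflection
   m.call { |n| puts n + 10 }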

Object#method

However, you've probably noticed that neither of these are really the same as actual methods. In fact, there is no way to refer to a method directly (i.e., without invoking it)...

You probably know of Object#method:

> o = Object.new
=> #<Object:0x2aaaab4654e8>
> def o.double(param) param * 2 end # Add a new method to this instance
> o.double 2
=> 4
> double_method = o.method :double # Get a Method instance
=> #<Method: #<Object:0x2aaaab4654e8>.double>
> (1..5).collect &double_method # Note the &
=> [2, 4, 6, 8, 10]

Sure...

You probably know of Object#method:



I do... But of course, that's reflection, and you ended my quote just before "other than via reflection"... ;)



It's a good point. Ruby does make reflection fairly painless, and obviously that's a core design principle for the entire language. On the other hand, it's still not "referring to a method directly" in a way that, for example, someone used to first-class functions would recognize. Imagining that we have a function f, using Object#method is less equivalent to:

f

and more equivalent to:

(map-lookup (get-current-environment) "f")

or whatever... Both are valuable, but the distinction is pretty important too, especially when it comes to static analysis, optimization, code refactoring, etc. For instance, in my example, it would be tough to find the second usage of the function "f" automatically, since "f" in that case is just a string. The first usage is trivially apparent. That extends directly to your example as well (although of course in that case :double is a symbol rather than a string).



In any case, this is interesting stuff. While I think Ruby is frustrating and ugly in some ways, I think it's instructive in some ways, too, and I think we'll have lots of fun things to talk about as it gets more popular...

No macro needed

I think what he means is that you don't have to write "lambda" when using an inline function (as opposed to an already defined one). The example is bad, definitely.

Anyway, I think you're mixing Scheme and Common Lisp, since to use an existing function in Lisp you'd still write:

(mapcar (function do-stuff) lst)
or
(mapcar #'do-stuff lst)

As for the reason ruby has separate namespaces for methods/functions and variables (like CL), it seems to be a mix of historical reasons and the conflict with being able to call methods without parentheses (the name is "uniform access principle", IIRC), i.e. in

x.foo = y.bar

"foo=" and "bar" are both method calls, not member variable access.

(mapcar #'do-stuff lst) bu

(mapcar #'do-stuff lst)

but you're mixing Lisp with Scheme too since if it were a Common Lisp program the variable wouldn't be called lst :-)

Ruby is a Unix-friendly Smalltalk

Ruby's object model is based on Smalltalk's, but it has more conventional syntax and libraries, and development is file-based rather than image-based.

Ruby's syntax has things in common with Perl's, but it's much saner.

which languages were just a fad?

It sure looks like more than a fad to me
I wonder which languages he considers just a fad?

So I suspect that both static and dynamic languages are with us for the long haul
I've read that a process control system written in BCPL has been in use at Ford Europe for the last 25 years - so we might suggest that untyped languages are with us for the long haul, too.

I can understand why we might argue about the different qualities of C++ and Haskell, but it's more difficult for me to see meaningful differences between Python and Ruby and Lua - these seem like different flavours of the same thing.

If it gets into production...

... it's most likely going to be with us for a while. I think the most likely environment for 'fad' languages is academia, which, if you think about it, is a good thing. That's where people need to be experimenting, trying new things, testing them out and moving on. If you're switching to a new language each year in a company, you're probably making a big mess.

Once you get it into production, you are in many ways locked in. The switching costs can be very high, especially considering that maintaining the status quo, once it works, probably isn't that expensive.

I wrote an article about the economics of programming languages here:

http://www.dedasys.com/articles/programming_language_economics.html

Which covers some of this.

differences

I can understand why we might argue about the different qualities of C++ and Haskell, but it's more difficult for me to see meaningful differences between Python and Ruby and Lua - these seem like different flavours of the same thing.

That's because you're comparing a low-level, rigid, ancient language ( C++ ) with a very modern, expressive one ( Haskell ). Ruby, Python and Lua are pretty much on the same level and all employ the same kind of high level constructs, though with different syntaxes and syntax sugar for common operations.

hey, you're that same gouy (sorry for the pun) who was complaining about "Freedom vs Safe Languages", aren't you? i wonder why they all look the same to you...

so no differences?

Ruby, Python and Lua are pretty much on the same level and all employ the same kind of high level constructs
That was my point: there doesn't seem to be a difference worth arguing about.

complaining about "Freedom vs Safe Languages"
Where I opined that the blogger had labelled a bunch of C-family languages as "safe languages", and that's why the languages he defined as "safe languages" all looked the same.

yep, no differences

a skilled developer used to C++ will be programming in and taking advantage of Perl, Python, Ruby, Scheme, Haskell, OCaml or Lua in no time, with the usual idioms.

after all, every language is the same, just with a few cosmetic changes in syntax... right?

why the languages he defined as "safe languages" all looked the same

except, commercial imperative offerings _do_ all look alike. it's far easier for someone used to VB, ObjectPascal (Delphi) or C++ to come to grips with the others' code than to deal with truly different ways of thinking as expressed in the aforementioned languages... it's far easier for a VB guy to understand C++ (even with templates) than Haskell or Perl.

Their "if" conditionals, "for" loops, function and procedure definitions and block-structured syntax look very much the same. Most of these people are not willing to giveup on said familiar constructs and embrace recursion, continuations, closures, first-class functions and related concepts.

and fp languages look like fp languages

after all, every language is the same, just with a few cosmetic changes in syntax... right?
not worth responding to

commercial imperative offerings _do_ all look alike
Because they are imperative, not because they focus on "safety" - the developers of Clean do focus on "safety" and Clean does not look like a commercial imperative offering.

Perl/Python/Lua control

Perl has closures but only provides stack-like control. Python has very limited generators. Lua has first-class coroutines, which are equivalent to one-shot (non-reusable) continuations. Those differences are not trivial at all.
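For Ruby readers, the coroutine idea looks roughly like this with Fiber (my analogy, not the parent's; the comment is about Lua): each resume runs the body up to the next yield, i.e. a one-shot continuation.

   fib = Fiber.new do
     a, b = 0, 1
     loop do
       Fiber.yield a     # suspend, handing a value back to the resumer
       a, b = b, a + b
     end
   end

   5.times { print fib.resume, " " }   # => 0 1 1 2 3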

Do we really need more imperative languages?

After all the time I've spent on this site reading papers and trying things out, I think the most important quality of a programming language is 'correctness'. I don't want to start another language war, but does the world need another imperative programming language? Isn't Java fine? I think it would be better if more research were done into how to optimize functional programs, or into developing calculi that allow transforming functional specifications into imperative ones (so that FP reaches imperative performance), than into another iteration of C++/Java/Perl/whatever that improves upon the previous instance by, let's say, 5%.

(I would also like to complain about XML, but it's offtopic).

Do we really need more languages?

I also don't want to start another language war, but does the world need another programming language? Isn't Lisp fine?

(I would also like to complain about XML, but it's still off-topic).

Well, yes it does.

Lisp is dynamically-typed, and dynamic typing is braindead. (No offense, but still...)

Also, despite the "code-as-data" hype, Lisp is still very much mired in the linear-list mindset, which is also braindead. (Compare and contrast with Prolog -- which also features "code-as-data", but with proper hierarchical structuring.)

code-as-data

"Compare and contrast with Prolog -- which also features "code-as-data", but with proper hierarchical structuring."

so what? it's not half as general-purpose a language as lisp is.

and no, dynamic typing is not braindead, you're probably just not using it to your advantage. besides, Common Lisp provides means to declare types and compile to native code.

No way.

Actually, Prolog is much more general-purpose than Lisp.

For example, you can implement a halfway decent array in pure Prolog, whereas in Lisp this is impossible without dropping down to assembly-level hacks.
The same goes for n-ary trees and tuples. (I'm not even going to mention the configurable parser built into Prolog, as that would destroy the whole point of Lisp in the first place.)

And yes, dynamic typing is braindead. There is absolutely nothing you can do with dynamic typing that cannot be done in a more efficient manner statically. The only benefit to dynamic typing is ease of compiler implementation, but that would be, indeed, a truly braindead way to reason about programming languages.

ok

ok, sure implementing arrays is the key concept behind general-purpose languages.

"configurable parser built into Prolog, as that would destroy the whole point of Lisp in the first place"

why would a builtin configurable parser destroy Lisp, a language which can itself implement a Prolog parser from scratch in a few hundred lines of Common Lisp, as Paul Graham showed?

"There is absolutely nothing you can do with dynamic typing that cannot be done in a more efficient manner statically."

by efficient, i'm guessing you're talking about machine efficiency, not programmer efficiency.

"The only benefit to dynamic typing is ease of compiler implementation"

and i thought it was ease of programming. silly me...

Re:

Point-by-point:

- Implementing arrays is just one metric of "generality" which shows how Prolog is more general than Lisp.

- Prolog has a configurable parser that is capable of handling prefix, postfix, infix or whatever-fix syntax the programmer desires, while at the same time sacrificing absolutely none of the "code-as-data" goodness. Lisp-like syntax is a braindead shibboleth.

- A few lines of Prolog can implement Lisp from scratch -- so what's your point? Sure, you can implement a parser in any Turing-complete language; that isn't really the point. The point is that the design decisions that went into Lisp are essentially outdated and braindead by modern standards.

- By "efficient", I am actually talking about both. This is a simple question of proper compiler design, nothing more. Another fat minus for Lisp.

- Ease of programming is orthogonal to the static/dynamic debate. Any typing system that can be implemented dynamically can also be implemented statically. Meaning, whatever your preferred dynamically-typed, easy-to-program language is, you can write an equivalent compiler that does the same thing statically. (Pretty much -- I'm not daring enough to state that as a mathematically strict claim. :))

fwd:

I find prolog's data-driven, facts-and-rules-based computational model both tiresome and annoying.

"Lisp-like syntax is a braindead shibboleth."

which syntax do you refer to: the parentheses or the prefix notation? other than that, it's mostly just function calls... what's so braindead about that? is it annoying because you can't code in notepad?

"A few lines of Prolog can implement Lisp from scratch,"

of course, since it's got such a small core.

"The point is that the design decisions that went into Lisp are essentially outdated and braindead by modern standards."

yeah. that's probably why more and more mainstream languages took a cue from it, rather than from rules-based prolog. including ruby.

" By 'efficient', I am actually talking about both."

yea, sure.

"Ease of programming is orthogonal to the static/dynamic debate."

i don't think so.

That's not an argument.

Lots of people find Lisp 'tiresome and annoying'.

That's not an argument to take seriously.

The point is, Prolog contains absolutely every positive point about Lisp while doing away with most of the design failures that got incorporated into Lisp. (And there are lots.)

This is not to claim that Prolog is the perfect PL; on the contrary, this is to counter the braindead Lisp propaganda.

Lisp is a quick-and-dirty language that does things cheaply instead of doing them correctly, and this is a fact you cannot deny.

Some people are "OK" with the quick-and-dirty approach to getting things done, but remember that they are not in the majority.

Really?

Prolog contains absolutely every positive point about Lisp while doing away with most of the design failures that got incorporated into Lisp.

If you want to make this claim around here, I suggest being more concrete. What are these "positive points"? How about higher-order functions, for example -- or are they a "failure"?

For the record, I like both languages.

Prolog doesn't have functions.

Though you can construct and evaluate terms in any way you see fit.

For example:

ev(A,true) :- A, !.
ev(A,false).

?- A=1+2*3, B='+'(1,'*'(2,3)), C='='(A,B), ev(C,R). 
R = true

yes
?- read(A), B is A.
1+2*3.
A = 1+2*3
B = 7

yes
Notice the way external representation, internal representation and execution model are clearly separated. (Something which is sorely lacking in Lisp, for example.)

As far as I am concerned, the positive points of Lisp are:

- Anonymous functions.
- Garbage collection.
- "Code-as-data".

None of which are unique to Lisp in any way.

high points

- Anonymous functions. - Garbage collection. - "Code-as-data"

how about conciseness? at least in comparison to the very verbose, data-driven prolog syntax...

Prolog doesn't have functions

well, that's too bad. one of the most basic units of abstraction and encapsulation is not supported...

perhaps that's why every time i see a prolog program it seems so plain and verbose... it's loads of term declarations. it's about as declarative as xml. in fact, it also reminds me of haskell function declarations as well...

that said, i promise i'll take a more in-depth look into prolog just to verify your strong claims. who knows? maybe i'll be a convert... :)

Well,

"Conciseness" is an aesthetic term, not a technical one.

I find programming in Prolog extremely concise, but on the other hand, I realize this is only because I'm familiar enough with it to code concisely.

As to functions -- that's a disagreement in terminology. Prolog is a language based on pattern-matching, and thus you can't really "return" values. (Because there is no notion of "results".)

And anyway, all this isn't an attempt to "convert" anyone -- I'm simply pointing out that the design decisions of Lisp are peculiar and controversial enough to drive away at least one person who is well-versed in the "Lisp mentality". :)

which

"design decisions of Lisp are peculiar and controversial enough"

as Ehud pointed out, which exactly?

*edit*:
"'Conciseness' is an aesthetic term, not a technical one."

still, it's a major point, and should not be disregarded just because it has no technical basis. that's the same as saying we should all be programming in assembly because it technically performs better.

there's more value to a language than its implementation details, and i guess social, economic and indeed aesthetic factors ("I code in language x because it suits my taste") come well before any technical arguments for most people...

and yes, we should not forget languages are specifications, not implementations...

Like I said.

In particular, I abhor s-exprs as external representation, s-exprs as internal representation, and dynamic typing.

There are probably other, more minor, points, but these three are enough to blacklist Lisp for me personally.

W.r.t. conciseness -- the code I write in Prolog is significantly more "concise" than the code I write in Scheme. Except that this means nothing apart from the fact that I know Prolog better than Scheme.

I am going to be blunt

What you (or I) "abhor" or "personally" don't like can be mentioned from time to time, but isn't really what people come to LtU to read about. Taste is important, but evidence and arguments carry more weight.

The current thread isn't really about Prolog or Lisp, by the way.

Too many recent threads are along the lines of "I prefer language X over language Y". This is definitely not what LtU is about.

I call on everyone to please try and keep the discussions focused, cite relevant examples and research, and leave religious wars and insults for other sites.

It seems to me that the discussion of Lisp and Prolog has outlived its usefulness for this thread, so if you want to continue this discussion, I suggest starting a new thread.

I thank you all for your consideration.

Exactly my point.

You summed up nicely what I was saying above, thanks.

worse is better

"Lisp is a quick-and-dirty language that does things cheaply instead of doing them correctly"

you've probably already read this, but anyway:
http://www.jwz.org/doc/worse-is-better.html

of course, the comparison is to C, still...

"Lots of people find Lisp 'tiresome and annoying'"

yeah, i guess nothing can suit everyone's tastes.

It's exactly what I had in mind, actually. :)

Still, what worked for C seems to have failed for Lisp.

As to why -- I'm not sure...

don't worry

what worked for C also failed for prolog. :)

Static typing with explicit type declaration is wordy

I agree that static typing is nice, but Scala's way of doing it, with implicit type declarations where the compiler can deduce the type in the vast majority of cases, is *much* better!

Sure, you can have an IDE write the type declarations for you in Java, but all these redundant type declarations sure hurt readability...

More comments

From Tim.