A compelling description of the features that make CL the king of the Perl-Python-Ruby-PHP-Tcl-Lisp language ;)
Lisp is often promoted as a language preferable over others because it has certain features that are unique, well-integrated, or otherwise useful. What follows is an attempt to highlight a selection of these features of standard Common Lisp, concisely, with appropriate illustrations.
Lisp had its heyday. Lisp Machines and Texas Instruments built machines with Lisp from the assembler up, and it has reached the level at which it is comfortable. If that level makes it king, then Lisp's true name must be King Canute.
I think most people would agree that the "certain features" mentioned above are all nice things to have, individually. You may not find it exciting to have every one of them interoperating at once with the possibility of cleanly integrating whatever new features might cross your mind (without getting mired in syntax), but I do.
One thing that's often missed in comparisons to Lisp is the sheer industrial strength of some of the language's implementations. People using Perl/Python/Ruby/PHP/Tcl aren't usually (ever?) working with programs that use, say, 60GB to 100GB of RAM on a single machine, as one commercial Common Lisp application I've worked on recently does. (No, it's not airfare-related.)
For applications of that size, the talk about "scalability" of languages becomes quite real. You need an efficient garbage collector that's been tuned to support large memories. It should have an API that lets you exercise some control over memory usage. You really want native code compilation when you're processing such large data sets. Having the ability to do things like dump and restore an image of a running application can be important. Good support for dynamic update of code is very useful, with choices offered when new code conflicts with existing code. The list goes on.
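To give a concrete taste of the GC-control and image-dumping points, here is a minimal sketch using SBCL's extensions; other implementations expose similar controls under different names, and MAIN is a hypothetical entry-point function:

(sb-ext:gc :full t)                          ; force a full collection on demand
(setf (sb-ext:bytes-consed-between-gcs)      ; tune how much allocation is allowed
      (* 512 1024 1024))                     ; between collections
(sb-ext:save-lisp-and-die "app.image"        ; dump the running image to disk,
                          :executable t      ; to be restarted later as a
                          :toplevel #'main)  ; standalone program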
Most of the other languages mentioned aren't even remotely in the same league. Of course, scaling horizontally across machines is one way to avoid the need for such features, but that's not practical for all applications.
...at what point do we distinguish the language (on paper) from its implementations?
Virtually all of the virtues you mention are qualities of CL implementations that aren't mandated by the language itself.
You mention scaling horizontally as a means for other languages to work on large datasets. Works great--Google has built an empire on this architecture. For I/O-bound problems, it's probably the more natural way to scale. Back when LISP was designed, scaling horizontally was a difficult nut to crack: computer networking interfaces and protocols were slow, expensive, immature, not terribly reliable, and frequently proprietary. And the OSes of the time weren't terribly network-aware; the preferred solution for building bigger systems was building bigger boxes.
Oh, and the standalone RDBMS as we know it today did not exist. The database management systems of the time were, to put it plainly, lousy. :)
Things like JavaScript and Python generally aren't called upon to service the sorts of loads that CL was (and still is), because they are part of a system wherein other components do the heavy lifting. It's not surprising that implementations thereof aren't "tuned" for large applications of this sort.
I'm curious. JavaScript is often described as Lisp's natural successor. Both are dynamically typed, strict, multiparadigm languages that encourage functional programming (HOFs in particular) and don't discourage stateful programming (from a cultural point of view), but neither has adopted the "everything is an object" worldview that Smalltalk descendants like Ruby embrace. The software you are reading this on probably has a JavaScript implementation built into it--one tuned to the demands of web presentation on a single-user workstation.
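To make the family resemblance concrete, here is a sketch in Common Lisp of the higher-order style both languages encourage; the first line is the analogue of JavaScript's [1, 2, 3].map(function (x) { return x * x; }):

(mapcar (lambda (x) (* x x)) '(1 2 3))   ; => (1 4 9)

(defun adder (n)                         ; functions are first-class values...
  (lambda (x) (+ x n)))                  ; ...and close over their environment

(funcall (adder 5) 10)                   ; => 15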
Can you think of any reason that JavaScript (or a JS-like language with modules, if you think the recent Harmony summit and compromise is a deal-breaker), with a suitably-designed implementation, couldn't be pressed into similar service as you describe?
(to paraphrase Ballmer.)
I was commenting on languages that currently have no implementations capable of meeting certain needs. Whether such an implementation is theoretically possible is moot, for someone making a practical choice.
Quality of implementation issues are an obstacle that many young languages have faced [edit: and/or academic languages], in some cases preventing them from gaining wider popularity. But it can still be an issue for more established languages.
Yes, which is why I wrote "the sheer industrial strength of some of the language's implementations." ;)
You mention scaling horizontally as a means for other languages to work on large datasets. Works great--Google has built an empire on this architecture.
It works great for certain kinds of applications. If it worked great for all applications, the famous parallelism problem would be solved. (Strictly speaking, it's distribution, not parallelism, but there's a close connection in practice - if Google had to wait for all its boxes to execute their code one after the other, and communicate the results from each one to the next, performance wouldn't be so great.)
Since languages with relatively weaker implementations can still scale well horizontally, we can expect to see them do that first. But that still leaves areas unaddressed where those languages are at a disadvantage.
Purely from a quality of implementation perspective, it could be done, of course. At that point, the language would have to compete on semantics and other such issues. But the fact that it can't do that now is relevant for anyone comparing languages today.
There may also be semantic reasons why one wouldn't really want such a heavy duty implementation for some languages. Perhaps it's just me, but I have difficulty imagining wanting to use, say, PHP in such contexts.
Implementation scalability is not the only thing overlooked (or ignored) by many people using Perl/Python/Ruby/PHP/Tcl. Implementation and library code quality and support are also ignored. To many people, the mere fact that an implementation exists (and is free) and that large libraries/frameworks exist (also free, even though the quality and support of both may be unknown -- *cough* not everything on CPAN really works *cough*) is enough to indicate that time-to-market will be short, which is what they really care about.
So often the reply from Lispers is "No, we don't have that, but it would be easy to do by just ...". Maybe that lack of reliable, pre-written, standard libraries is why people choose not to use Lisp: they so often have to roll their own, time and again.
Hopefully, Lisps that target virtual machines (Clojure, for example) will improve the situation in this respect.
Just googled Clojure and am taking a look. I especially liked the ability to pop up a "hello world" GUI dialog with:
user=> (. javax.swing.JOptionPane (showMessageDialog nil "Hello World"))
nil
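(The (. ClassName (methodName args ...)) form is Clojure's Java interop syntax; the nil on the second line is just the REPL printing the return value of the void showMessageDialog call.)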
But I remember that Perl has been used in bioinformatics to process DNA databases, apparently with some success, and that kind of data is usually not small.
[And no, I'm not a Perl fan; in fact, I hate it.]
I'm sure that many Perl libraries don't scale, but this doesn't mean that Perl is a toy either.
Those Perl applications rely heavily on disk storage, and only process a little of the data at a time. I was referring to systems which use large amounts of RAM, because the data access requirements are such that constantly accessing it from disk would be prohibitively slow.
Of course, there's always more than one way to skin the cat. These days, you might work around the need for a single large in-memory image by using a distributed memory cache. But such solutions add complexity, and can reduce performance compared to a single image, depending on data access patterns.
BTW, Perl in particular has a problem managing complex in-memory object graphs, because of its reference-counting garbage collection, which (a) is inefficient and scales badly, and (b) can't handle circular structures.
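For contrast, here is a minimal sketch of why cycles don't bite the tracing collectors that CL implementations use:

(defun make-cycle ()
  (let ((cell (list 'a)))
    (setf (cdr cell) cell)   ; the cell now refers to itself
    nil))                    ; drop the only external reference

;; Under pure reference counting each cell's count never drops to zero,
;; so these would all leak; a tracing GC reclaims them as soon as no
;; root can reach them.
(dotimes (i 1000000)
  (make-cycle))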