newLisp: A better Lisp/Scheme Fusion...

I had been breathlessly watching Paul Graham's website hoping for news about Arc, his "New Lisp". But I hadn't realized that a group of developers had already beaten him to the punch!

newLisp is an updated (and scaled down) Lisp, targeted at the scripting world. From the web site:

newLISP is a general purpose scripting language for developing Web applications and programs in general and in the domain of Artificial Intelligence (AI) and statistics.

Among its many interesting features (such as useful functions for getting scripting work done, good performance, and small footprint) are:

  • Dynamic and lexical scoping with multiple name spaces
  • OOP extensions
  • TCP/IP and UDP networking functions
  • Perl compatible regular expressions, PCRE
  • Matrix and advanced math functions
  • Financial math functions
  • Statistical functions
  • XML functions and SXML support
  • Tcl/Tk Graphical Frontend
  • Modules for MySQL, SQLite and ODBC database access
  • CGI, SMTP, POP3 and FTP Modules
  • Complete documentation in HTML and PDF

While many new scripting languages languish with good implementations but no fully realized libraries or interaction with outside software, newLisp seems to have sprung forth fully formed, with various useful libraries already implemented.

newLisp compiles on most Linux and Unix versions, Cygwin, and Windows, and presumably Mac OS X. It is licensed under the GNU General Public License (GPL).

Who knows -- perhaps now Ehud will have a Lisp with which he can finally get some scripting work done!

what's so good about newLisp?

Maybe I'm not understanding something, but I don't see what the advantage of newLisp is over Scheme. All of the above listed features are available for Scheme (except dynamic scoping, which is only kinda sorta available). On top of that, it seems that the canonical style isn't functional, it's imperative. I can see why you'd want to move from Common Lisp to newLisp (minimalism), but you might be better off using Scheme.

Standard implementation

The key appears to be a standard, cross-platform implementation, one that includes a host of useful libraries. Lisp and Scheme--and Prolog, too--have suffered from there being many differing versions, even though the languages themselves are standardized. Consider that I can write a Python script for Linux and be confident that the script is immediately cross platform, including all libraries (unless I do something that's inherently platform specific, like reference "c:\" or "/usr/home" or whatever). This is a big deal.

Scheme can do that too, sorta

It may be convenient that all Python scripts are cross-platform, but so are simple Scheme scripts. (Also, everything written in BrainFuck is completely 100% portable to all BF implementations :). If you want it to always work, just use one implementation; for example, tell everyone that to run your program, they need to get PLT Scheme. PLT alone has many more users than newLisp, and it has more libraries. And if you're telling people to get newLisp to run your program, you're confining them to a single implementation anyway.

newLisp is also a Specification

I agree that PLT is the most complete Scheme release out there, and that it's a bit of a tautology to speak of a language with a single implementation as having a better standard.

However, we are comparing apples and oranges (or perhaps they are both apples, but we are comparing Granny Smiths and McIntoshes...). Scheme (the language) does not specify a module system, library components, or other core elements. Consequently, we have a dog's breakfast of competing standards. The advantage of newLisp is that it is aimed at the scripting market, and consequently has documented (specified, if you will) elements that make scripting life easier, such as built-in file and directory traversal, PCRE support, etc. Lots of Schemes *have* these, but they are not part of R5RS.

Are there other implementations?

If there are no other implementations of newLisp, then how is it different from any single implementation of Scheme? The existence of a core standard for Scheme doesn't in fact detract from the greater functionality of its richer implementations, even if it does create PR confusion.

The response to that is sometimes "yes, but having so many different Schemes fragments the market". But I don't see newLisp helping in that respect.

True

That's a good point, Anton, and not having used NewLisp myself, I concede you might be exactly right. My impression, however, is that the NewLisp libraries are focused on the kind of thing that you'd use Perl or Python for. Is that true of some reliable and widely used Scheme implementations? I shopped around for a good cross-platform Scheme a few years back, but I wasn't satisfied with what I found. Instead, I've been happy using Perl for throwaway tools and Erlang for problems requiring heavy lifting.

Scheme for cross-platform scripting

I shopped around for a good cross-platform Scheme a few years back, but I wasn't satisfied with what I found. Instead, I've been happy using Perl for throwaway tools and Erlang for problems requiring heavy lifting.

My recommendation is to take another look at PLT Scheme. People are often misled by its educational focus, but its IDE is actually very powerful, and its console-based version, MzScheme, makes a great scripting tool that performs very well. There are many excellent libraries available for things like HTML, XML & RSS processing, SQL access, and the like.

Gordon Weakliem and I used PLT to do much of the conversion of LtU from the old weblogs.com Manila database, which was a classic scripting-language kind of task (a full conversion of the old message database to Drupal was even done with Scheme, but that hasn't been deployed yet). In the interests of credit & full disclosure, Chris Rathman also generated the "/classic" pages with Python, so it wasn't all Scheme.

CPAN is certainly more comprehensive than the libraries available for PLT, so it depends what you're trying to do -- but for the most common sorts of scripting requirements, PLT is a contender. The Scheme Cookbook is intended partly to demonstrate and showcase some of these sorts of applications of Scheme.

BTW, I believe there are other Schemes that qualify, but I don't have much experience with them in the cross-platform scripting context. Scsh and Gauche are both excellent at Unix scripting, but I'm not sure to what degree they also support Windows (other than via Cygwin, which often isn't what's wanted). Java-based Schemes like SISC and JScheme are great scripting tools - benefiting from Java's comprehensive libraries - if you're OK with running such things on the JVM.

Finally, a nice thing about Scheme is that you don't have to switch languages when heavy lifting is required.

How about scsh

scsh is specifically targeted at scripting and includes a wealth of libraries especially for that task. It's still a bit verbose - but that's a "problem" shared by almost all Lisp and Scheme implementations, with their rather long identifiers. On the other hand, long identifiers make code more readable.

No -- No other implementations. But there could be...

I agree that newLisp does not help with the Scheme balkanization problem. However, it seems to sidestep the issue by saying "we are a scripting-focused Lisp." Consequently, it doesn't worry about R5RS or SRFI compatibility, while still providing the features those documents were designed to encourage.

As you already know, PLT Scheme is my favorite Scheme implementation. But that doesn't stop me from being frustrated that code written for Guile or Scheme48 can't generally be used under PLT (or vice versa) because of the many subtle differences in the libraries, module systems, etc.

From my brief impressions of newLisp, it is in effect saying "Here is a scripting-focused Lisp implementation that has these features, and this module system, and this object system." Anyone who meets that standard can be termed an implementation of "newLisp", just as anyone can implement Perl (assuming they provide the same module support, built-in-functions, etc.).

I don't think the fact that there is only a single implementation of newLisp is compelling, since by that standard, both Erlang and Perl should be considered non-useful. newLisp's only crime is that it seems similar to Scheme and Lisp, and is therefore judged based on how it fits in with existing projects.

but why a new language?

I don't think the fact that there is only a single implementation of newLisp is compelling, since by that standard, both Erlang and Perl should be considered non-useful. newLisp's only crime is that it seems similar to Scheme and Lisp, and is therefore judged based on how it fits in with existing projects.

The problem is not that there's only one implementation. It's that the same result would be attainable with far less effort if a single cross-platform Scheme implementation were used as a base. Then the developers get a solid language implementation, and Scheme users get scripting libraries; everyone wins. Why scrap it all and start from scratch, if the core language offers no advantages over Scheme?

Probably because it breaks the expected Scheme/Lisp behavior.

There is a good overview of the differences among newLisp, Scheme, and Common Lisp that you can refer to, but here are a few reasons I think they created a new language:

  • One reason is that it changes the expected behavior of the Lisp evaluator. For example, if newLisp runs across an undefined symbol, it evaluates it to nil. This is not the expected behavior for Scheme and Lisp, but is more in keeping with a scripting language like Perl.
  • Another change is the decision to use dynamic scoping by default, rather than lexical scoping. This is somewhat of a throwback to the Lisp of the '50s.
  • Furthermore, the behavior of macros is slightly different. In newLisp they work more like functions, except that their arguments are passed unevaluated. I don't know if this is a good or bad design choice, but it's another core change that could not sit on top of an existing Scheme or Lisp (see the sketch below).
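
For illustration, here is a minimal newLISP sketch of the first and third points (the names are mine, and the snippet is only meant to demonstrate the behavior described above):

; undefined symbols evaluate to nil instead of raising an error
(if not-yet-defined "was bound" "was nil")  ; => "was nil"

; dynamic scoping: a free variable is resolved in the caller's
; dynamic environment, not where the function was written
(define (show-x) x)
(let ((x 42)) (show-x))                     ; => 42

; macros are fexpr-like: arguments arrive unevaluated, and the
; body evaluates them explicitly only if and when it chooses to
(define-macro (my-setq var value)
  (set var (eval value)))
(my-setq y (+ 1 2))                         ; y => 3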

Back to the Future!

Brent Fulgham: Furthermore, the behavior of macros is slightly different. In newLisp they work more like functions, except that their arguments are passed unevaluated. I don't know if this is a good or bad design choice, but it's another core change that could not sit on top of an existing Scheme or Lisp.

SEXPRS and FEXPRS and LEXPRS, oh my!

Why a new language?

My guess? Because the authors don't know how to implement a language properly. NewLISP feels like an accumulation of hack upon hack; when they had something that felt vaguely like a Lisp, they released it and started writing code in it.

I would say this criticism applies to a great extent to Python and Perl as well, though they weren't explicitly aimed at becoming Lisp.

Having a single implementation

I'd say the biggest consequence of NewLisp's divergence isn't just that the language is different (perhaps worse, I don't know) but that it's a language with a single implementation. If it were me I'd be willing to do some violence to the language to get that property.

Comparing an apple to a basket of kumquats

As you already know, PLT Scheme is my favorite Scheme implementation. But that doesn't stop me from being frustrated that code written for Guile or Scheme48 can't generally be used under PLT (or vice versa) because of the many subtle differences in the libraries, module systems, etc.

From my brief impressions of newLisp, it is in effect saying "Here is a scripting-focused Lisp implementation that has these features, and this module system, and this object system." Anyone who meets that standard can be termed an implementation of "newLisp", just as anyone can implement Perl (assuming they provide the same module support, built-in-functions, etc.).

But with newLisp, the only reason you can't be frustrated by lack of portability to other newLisps is that there are no other newLisps! Regarding implementing other newLisps, are there any such efforts in progress? If not, how is that any different than the possibility that someone else could produce a PLT-compatible implementation of Scheme?

As I said, other than from a PR point of view (which is admittedly important), the existence of multiple semi-compatible Scheme implementations in no way detracts technically from the power of any individual implementation. Unless you're saying that the requirements of being a Scheme somehow hamper these implementations, but I don't see that.

newLisp's only crime is that it seems similar to Scheme and Lisp, and is therefore judged based on how it fits in with existing projects.

Based on other comments here, it sounds like it may have committed some more serious semantic crimes. :)

Jeremiah was a bullfrog.

But with newLisp, the only reason you can't be frustrated by lack of portability to other newLisps is that there are no other newLisps!

Well put.

I can't resist:

But in a unityped language, the only reason you can't be frustrated by lack of functions to other types is that there are no other types.

I can't resist: But

I can't resist:

But in a unityped language, the only reason you can't be frustrated by lack of functions to other types is that there are no other types.

Luckily, this doesn't affect latently typed languages! Don't forget that unitypes are merely an implementation mechanism by which statically typed languages can support the implementation or embedding of latently typed languages.

A new language?

If there are no other implementations, and the language doesn't offer anything actually new, I don't see much use in it. Why not develop "scripting libraries" for PLT Scheme instead? It's cross-platform, efficient, and a solid Scheme implementation, with extensions for OO, modules, etc., etc.

Convenience?

True -- good "scripting libraries" for PLT would be useful, and a good use of resources. However, how far can you take the library concept?

Example: Read the text of a file into a string:

PLT:

(define (read-all path)
  (let ((size (file-size path)))
    (call-with-input-file path
      (lambda (p)
        (read-string size p)))))

(define file-contents
  (read-all path))

newLisp:

(define file-contents
  (read-file path))

Advantage? Not much -- but if I was trying to script something, I'd be a bit happier with the newLisp version, since I wouldn't have to use the PLT recipe for reading an entire text file.


Certainly we could codify these in libraries so PLT could be started with these functions defined. All I'm saying is that newLisp is interesting because its designer made the decision to work this way out of the box.


I'm curious if this will help newLisp's adoption by a wider audience.

PLaneT

PLaneT was designed to combat this problem. I'd be very happy if somebody came up with a "scripting" package that had "read-all" and similar functions all bundled together and submitted it to me for PLaneT inclusion. Then we wouldn't have this sort of problem where there are lots of four-line little functions you've got to define at the top of your file whenever you want to write a little script. We'd just say 'PLT isn't a scripting language, but PLT + (require (planet "scripting")) is!'
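
As a rough sketch, such a package might start out as little more than a module collecting the usual four-line helpers (the module shape below is illustrative, reusing the read-all example from earlier in the thread):

; a sketch of what a "scripting" package might bundle
(module scripting mzscheme
  (provide read-all)
  ; read the whole file at path into a string
  (define (read-all path)
    (call-with-input-file path
      (lambda (p) (read-string (file-size path) p)))))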

Gettin' there!

We'd just say 'PLT isn't a scripting language, but PLT + (require (planet "scripting")) is!'

And it is so getting there! I'm really excited about PLaneT.

Just today I needed a list of some prime numbers (who doesn't from time to time?) so I hopped over to www.prime-numbers.org, used my favorite HTTP-snarfing script (which I'll soon be adding to PLaneT), and thanks to Neil van Dyke's HTMLPrag -- available in PLaneT -- I had the contents in an S-expression with one single application of HTML->SXML! So cool.
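
For the curious, the whole pipeline amounts to something like this (a sketch: get-page is a hypothetical stand-in for the HTTP-snarfing script, and HtmlPrag is assumed to be loaded):

; get-page (hypothetical) fetches the URL's body as a string;
; html->sxml then parses even sloppy HTML into SXML
(define page-sxml
  (html->sxml (get-page "http://www.prime-numbers.org")))
; page-sxml is now an ordinary S-expression, ready for list processing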

Scripting in PLT can indeed be awkward

But there are other Scheme implementations more geared towards convenience, eg:

(call-with-input-file path port->string)

in Gauche.

Scheme community is not unified

Yes, I agree, that makes sense. But one of the big advantages of "one implementation" languages is that they keep the community from fragmenting. The community surrounding Scheme--and also some other languages, like Prolog--is much less unified. Some kinds of libraries can be shared, but not others. You could use Scheme as the basis for a standard scripting language, but then the PLT users might be miffed, users of other Scheme implementations might just keep using what they have, and so on.

Community vs. implementation unity

The Scheme example makes me wonder...

I think the Scheme community is quite strong, and unified - even though there are many implementations, and code isn't easily migrated between them.

A single implementation may help prevent community fragmentation, but can't guarantee it. A strong community can help prevent implementation fragmentation, but can't guarantee it either.

Something that reminds us of Lisp age...

... is the fact that both Scheme and Common Lisp come from a time when everyone had their own in-house database management systems. There was no Oracle, no MySQL. Hence, both Scheme and Common Lisp have APIs to create and access simple, local, single-user database stores, but not to access the many competing DBMSs on the market today.

newLISP covers that, like all good scripting tools for the quick, mundane programming tasks of everyday life.

Scheme & Lisp have excellent database support

Hence, both Scheme and Common Lisp have APIs to create and access simple, local, single-user database stores, but not to access the many competing DBMSs on the market today.

This seems to be another example of confusing what a language standard provides, with what implementations offer. All the serious Lisp & Scheme implementations have support for accessing databases, whether via ODBC, direct bindings e.g. through the target database's C API, or whatever. Then there are higher-level interfaces to SQL like SchemeQL, which have no equivalent in the lesser scripting languages.

The fact that the Scheme or Common Lisp standards don't specify these database interfaces isn't relevant to any comparison in this case. Languages like Perl and Python don't even have such standards - their implementations are the standard, and should be compared with similarly rich implementations of Scheme & Lisp.

How about Sockets, then? :-)

Let's take as an example the New 'n Improved Programming Languages Shootout. There is a test of a simple TCP/IP server, which requires a bit of socket programming. Unfortunately, CMUCL and SBCL (which are both offshoots of the original CMUCL) do not share a compatible socket-handling library. So, while most of the shootout tests will work on either platform, SBCL requires a unique version of the echo test (which has yet to be written).

Add to that the various competing database access tools (UncommonSQL, MaiSQL, CLSQL) and it's little wonder the average user becomes frustrated. Look at our own beloved PLT Scheme -- we have SchemeQL on top of SrPersist, which works nicely. But we also have lower-level attempts we started on (like the postgres, firebird, and mysql drivers in Schematics). Guile has its own versions, Chicken has its own, etc.

I like to see the diversity and varied approaches, but I'm also a not-quite-right-in-the-head language guy who likes working with weird stuff. I think the average users migrate to Perl and Python because they know there is really one "official" way to do these things, and that their software stands a good chance of working no matter what version of Python they use.

You are right that we should be comparing PLT Scheme and Python, not R5RS Scheme and Python, but I think from a psychological standpoint, to the average user the name "Common Lisp" seems to indicate a standard implementation should be available; likewise, the moniker "Scheme" should mean something, too.

"No-one ever got fired for buying IBM"

You are right that we should be comparing PLT Scheme and Python, not R5RS Scheme and Python, but I think from a psychological standpoint, to the average user the name "Common Lisp" seems to indicate a standard implementation should be available; likewise, the moniker "Scheme" should mean something, too.

People want Microsoft-like homogeneity - they want to pick one primary language which is going to be big and stable and standard enough that, at least in their perception, their learning over time is minimized, their productivity is maximized, there's a large community to benefit from, and they don't have to try too hard to defend their choice (see subject line quote).

There are a lot of reasons why Lisp and Scheme have difficulty meeting these criteria, whether in perception or reality. Most of them apply to newLisp, too, so that aspect of the discussion is moot. ;)

The diversity and varied appr

The diversity and varied approaches don't exist in Perl? Sorry, I don't know what you mean (I don't know Python, so I can't comment on that).

So what is that "official" way to do XML parsing?
Will you use XML::Parser? XML::SAX::Expat? XML::SAX::ExpatXS? XML::LibXML? XML::Xerces? Or anything else?

The reason CPAN is good is _just because_ it has so many things, not because there is only One True Module. What's bad about competing SQL libraries? If they solve the problem in different ways, each having advantages and disadvantages, where is the advantage gained by only having one way to do things?

You say that "Common Lisp" seems to indicate a standard implementation should be available, and I'd say that various implementations implement the standard very well. Do you expect the standard to cover everything the language could ever be used for, providing a way to do networking, a way to interface with SQL servers, a way to parse XML, and so on? I find that quite unreasonable.

If you are trying to tell me that one cannot write fairly portable implementations of things using, say, networking features, you'd be wrong. Portable Aserve (a web server written in Common Lisp), for example, lists CMUCL, SBCL, GNU CLISP, OpenMCL, LispWorks, MCL, Scieneer Common Lisp, Corman Common Lisp, and Allegro Common Lisp as implementations it runs on. CL-IRC, a Common Lisp IRC client library, as another example, seems to run (judging by the source) on CMUCL, SBCL, Allegro Common Lisp, OpenMCL, LispWorks, Armed Bear Common Lisp, and CLISP.

These are just examples, though; there are various other programs making use of networking that *don't* just run on one implementation.

Having differing implementations of the same thing doesn't mean that you cannot write portable code, it might just take a little longer than writing unportable code.

Not What I'm Arguing ...

I'm not arguing that CMUCL and SBCL are not valid implementations of ANSI Common Lisp. What I am saying is that the various implementations have not done a great job of being consistent with the various "other" aspects of the language (such as network and database access). If you look at the Portable Allegro Server sources, you will find that the developers had to be quite careful to encapsulate network access behind routines that allow specialization per implementation (much like #ifdefs in C).

Furthermore, the various CLs have spent considerable effort creating the ASDF build infrastructure to help cobble the various Common Lisp libraries together into a cohesive whole. PLaneT is a Scheme example of this idea.

The various Standard ML and Haskell implementations are doing an admirable job of staying consistent with each other with respect to libraries and APIs.

What I noted was interesting about newLISP is that it provides a standard API for these tasks. This gives newLISP an efficiency over Scheme that I think is an advantage in gaining popularity.

What standard?

How can you compare the single-implementation poorly-specified newLISP language to ANSI Common Lisp or Scheme? NewLISP is not a standard by any stretch of the word.

By your argument, CMUCL, PLT Scheme, or any old implementation, can be considered to have a "standard" networking API: its own "standard" API.

One implementation per community

On reflection I don't think the important thing is to have one implementation per language but to have one implementation per community of programmers. That way you can use (and extend) all of the available features and still easily share code with the people you rub shoulders with. This feels very right.

If PLT hackers just do their thing and don't worry about porting to Bigloo etc then I'm happy for them. Things are tougher with Emacs Lisp and Common Lisp where the free software communities don't share the same implementation. Surprisingly it's hard to even deliberately write non-portable free software in these languages: I have tried many times but they mostly ended up growing portability.

I would understand the NewLisp people gratuitously changing their language in order to foster a new and unified community. (I'm not saying that they did that.)

Lexical scoping

Is the term "lexical scoping" preferred now over "static scoping"? I had to look that one up.

Is it really lexical?

I recall looking at newlisp a few months ago, and came to the conclusion that its claim of supporting lexical scoping was a misrepresentation: it does the same trick as Python, determining scope by means of a lexical tower of dynamic namespaces.

Have I got this right?

capture

I took a quick look at it. It looked to me as if the way you're supposed to avoid variable capture is to use variables with unusual names. Is this really the plan? What about expansions that occur within expansions?

If I'm going to be beaten to the punch by another Lisp dialect, can we make it Goo?

newLISP ... a WORSE Lisp

First of all, newLISP is not that new... I remember criticizing it at least a year ago. Back then it had no "contexts", and arrays were second-class entities, represented by concatenating symbols with numbers and accessing the value of the newly generated symbol. In other words, completely ridiculous.

It still does not have proper lexical scope. Arrays *may* have been fixed, but now the so-called "hash tables" are implemented in that old way. Paul, I would not worry at all about this language "beating you to the punch", it's a backwards language written by people with good intentions but little clue.

Take a look at some of these prize quotes from the manual:

``Memory is automatically deallocated when dereferencing objects.''

(a) The terminology is incorrect (b) They are proud of reference counting, which is dumb.

``Many of newLISP's built-in functions are polymorphic in type''

Yeah, except for some of the most important ones: +, -, *, and /! The numeric types and functions suck.

``Compiled LISPs with type tags trade type polymorphism for the faster performance of native code, but newLISP's speed is comparable to or better than the speed of other interactive programming languages.''

From this comment I deduce that the author of newLISP has NEVER used a real Lisp or Scheme implementation, or if he did, he did not stop to think about what was going on.

``Lambda expressions in newLISP evaluate to themselves and can always be treated as lists''

Have we learned nothing in the last 50 years?

``(cons '() nil) => (nil) ;; would be '() in other LISP dialects''

As I was saying, he never used another Lisp.

...

In any case, you can have fun finding more strange statements in the manual. It's still a joke.

Sounds like Logo

It sounds a lot like Logo, only without the good parts of Logo. Logo is actually quite a good candidate for scripting (e.g. Rebol); too bad they didn't try to take off from there.

Quality of Documentation

I have a tendency to judge implementations by the quality of their documentation. If the quality of the documentation is high, then with high probability the quality of the implementation is also high.

Switching languages (or even implementations) requires some effort, and to make that effort as small as possible, one needs to trust the documentation.

Looking at some of the fundamental operations often reveals much. As a random example, let's take the entry on begin:

    syntax: (begin body)

    The begin form is used to group a block of expressions.
    The expressions in body are evaluated. The value of the
    last expression of body is returned.

This isn't too bad - but I do find it somewhat peculiar that they don't mention the order in which the expressions in the body are evaluated.

Another fundamental operator is quote:

    syntax: (quote exp)

    exp is returned without being evaluated, as if quoted.
    Note that before version 6.5.23 quote evaluated exp
    first. The current new behavior is consistent with
    Scheme and Common Lisp.

I am glad I already know what quote does.

However, I do see that there is a need for making scripting easier in Scheme. Is a "portable" scsh the solution? Is something more radical needed?

Common Lisp can come close, too

In both CLisp on Windows and SBCL on Linux I load at startup:

    CL-PPCRE for regular expressions
    s-xml for XML parsing
    LTK for TK-based GUIs

In SBCL I also use CL-SQL for database access, and of course I use ASDF to load all this in.
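
A minimal sketch of such a startup file, using the usual asdf:oos idiom (the system names are the ones listed above):

;; load the scripting toolkit via ASDF
(asdf:oos 'asdf:load-op :cl-ppcre)  ; regular expressions
(asdf:oos 'asdf:load-op :s-xml)     ; XML parsing
(asdf:oos 'asdf:load-op :ltk)       ; Tk-based GUIs
;; and, under SBCL only:
(asdf:oos 'asdf:load-op :clsql)     ; database access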

That takes care of most of my needs, plus we have the more mature, documented language.

newLISP vs PLT

I admit I'm fairly new to Lisp and Scheme but ...

With PLT, I got tired of wading through the endless documentation to do simple tasks. There were not enough examples in the documentation to figure things out quickly and get programs running. With newLISP, on the other hand, I've been able to write all kinds of programs and CGI scripts quickly, because I had examples, and while the documentation is not perfect, I can still find what I'm looking for.

I don't have time to do everything that a perfect language requires, I just need to get my job done. At least newLISP is a LISP I can use for everyday tasks.

New Lisp - worse but useful

I was looking for a language in which to write a little testing program for my friends at a school. I needed to work with Czech characters and to create an application that users could drive with the mouse on MS Windows. I found PLT Scheme and liked, for example, that it could use the native open/save file browser. Unhappily, it worked veeeeery slowly, swapping all the time. OK, I said, we will be patient, and so I wrote the program, which ran slower than the difficulty of the problem justified. Then I used the "create executable" feature and copied my exe to my computer at home. Oh, what horror! It needed a DLL. I copied that as well, and then saw that I needed another one. I installed all of PLT Scheme, and then... then I saw that I needed the stream files. I copied those too, but they had to be in the same path as on the computer where I created the executable.

So my friends would have to create the path c:/documents and settings/... The claim that PLT Scheme makes executables is a clever fable!!!

That is the reason I looked for another language, and I hope newLisp DOES NOT LIE.

If fast-startup is your problem...

...take a look at John Harper's underappreciated librep, which has bindings for all sorts of gtk+, Gnome, and UNIXry, and is a properly tail-recursive Lisp-1, but with a sort of Scheme/CL/emacs-lisp procedure set, lexically scoped variables side by side with dynamically scoped variables (which, it turns out, are very inconvenient to do without for emacs-lisp idioms), a Scheme-48-ish module system, and a slightly scsh-ish approach to process creation and job control. And it starts like greased lightning. You may have seen it already: it drives the sawfish X-windows/gtk window manager and an emacs-alike editor called jade.

It's also in need of some tender loving care: John added first-class continuations to the language some time back, but all of the storage issues haven't been ironed out yet, and he hasn't got much time these days to stamp them out. He's open to constructive input: I've been working at glacier speed on a port of the SYNTAX-CASE hygienic macro system to librep, so that hygiene can be layered upon the native defmac-like macro system in a coherent way. The Gnome bindings aren't up to date, but I guess it's not so hard to fix this.

It's much better than newLisp in terms of fundamentals; take a look.

Since I'm posting this: Luke, have you looked at librep? What do you think?

NB: I, Charles Stewart, uid=918 am the same account-user as cas, uid=1139

rep

I like rep, thanks for asking. I've been using sawfish for about six years and used to write some rep back before my configuration was Perfect (which is more due to direction.jl and pack-window-{up,down,left,right} than anything I wrote myself). Sawfish is a great program.

I don't enjoy programming sawfish nearly as much as Emacs though. My problems are not knowing a good way to debug rep programs and not having good support for finding my way around (no M-. to see source, major lack of docstrings for apropos, scanty reference manual). Dave Pearson's sawfish.el makes it quite fun though.

How about jade?

I actually thought of you because of your ermacs work. Rep is the core language of jade-lisp, extracted so that it could be used for sawfish, and the alpha version 4 of jade is refactored (or should that be unfactored?) to use librep. I guess jade is the natural home for rep; I've done no real coding for sawfish (it's my window manager, but I just use it as TWM plus the ability to put useful buttons on windows and menus).

Debugging: it's meant to be like C programming (i.e. you use gdb). It's not the true Lisp way... But isn't debugging emacs-lisp also not very hi-tech? I've not done any nontrivial emacs-lisp work in years...

Pearson's module & other sawfish leads: Thanks!

M-. to see source: that would be worthwhile. I'll think about that.

Jade, Ermacs, Emacs

Jade looks rather more successful than Ermacs :-). My experience was that by the time I had the editor complete enough to realistically hack itself with (minus isearch, I'm ashamed to say) I had developed a new respect for GNU Emacs. These days I'm a contented Emacs user and I don't want to replace it anymore. EmacsHasQwan!

But I reserve the right to change my mind if someone does a really great Emacs replacement :-)

What I'd really like is a successor to Emacs with some fundamental new and sexy capabilities - something you could write a Firefox with. That is the overall most important open problem of computing in my opinion.

P.S. For me, jade from CVS warns in autoconf, and the resulting configure doesn't work:

$ autoconf
configure.in:63: error: possibly undefined macro: AM_PATH_REP
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
configure.in:76: error: possibly undefined macro: AM_PATH_GTK

P.P.S. For some time at Nortel we had no free space on our appliance platform to include Emacs, but I failed to take this great opportunity to put Ermacs to real use! Maybe things would have turned out differently :-).

newlisp is cool

because the creator/implementor is a nice person who seems to answer questions diligently. He also makes frequent updates to the language and implementation. That makes a big difference to a new language. I think newlisp is very usable and certainly much more fun to use than other LISPs (especially Common LISPs, except perhaps the CLISP implementation). And I like the small size.

Hilarious.

Especially the people who seem to be under the impression that pass-by-deep-copy is a substitute for pass-by-reference with compiler-enforced immutability of the referenced data.

pass-by-reference

Nobody thinks that pass-by-deep-copy is a substitute for pass-by-reference. newlisp is not pass-by-copy only. It allows for pass-by-reference using contexts (see the sketch below). Once you start hacking in newlisp, you realize that it is better that way. Many things are like that in newlisp: first you think WTF?, then after you use it for a while you realize, oh yeah... What it takes is this -- take off your Computer Science hat for a while and just hack some little programs with it. For example, at first I thought weird things about the newlisp way of using processes to do "thread"ing off an expression and using shared memory. Then I actually played around with it a little and voila! It is one of the best ways to enable use of UNIX shared memory facilities, which are rarely used anymore by application programs, unfortunately, mostly due to an error-prone and cryptic interface. With newlisp things just work, and in an intuitively easy way. All those people who worked so hard to make SYS-V style semaphores and shared memory work efficiently in Unix kernels, rejoice!
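
For those who haven't seen it, here is roughly what pass-by-reference via contexts looks like (a minimal sketch; the names are mine):

; the callee mutates a slot in the caller's context, so no
; deep copy of the data is made on the way in or out
(define Store:data nil)             ; creates context Store

(define (fill ctx)
  (set (sym "data" ctx) (sequence 1 5)))

(fill Store)
Store:data                          ; => (1 2 3 4 5)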

Not convinced.

Computer Science is what brings us advances in algorithm theory, type theory, etc. etc., so it would be silly to disregard the basic principles it teaches us. Mutable shared state (a.k.a. shared memory) is a horrible, horrible way of doing concurrency -- some of the complexity/error-proneness can be kept in check with something like Haskell's Software Transactional Memory, but even that has its problems. Others have written extensively on the horrors of mutable shared state, so I'll refrain from blathering on. If you want to see some examples of better ways of doing concurrency, you need look no further than Erlang, AliceML, Acute, or CSP (if you're more "mathematically" inclined).

That's it from me in this thread.

off topic

Keep your CS hat on... and refuse to use anything that does not conform to your view of proper CS theory. With that attitude, many of the systems that make the world go around would not exist. Shared memory may be error-prone, but it is one of the most efficient ways of doing IPC. When used with proper care in the proper context, it gives you what you need. There are robots crawling on Mars that use shared-memory IPC.

Shared memory

At a low implementation level shared memory is great, but it is horrendous as a programmer-level abstraction. That is, it's perfectly fine as a tool to implement efficient (as in zero-copy) higher-level abstractions like message-passing or transactional memories but other than that I don't want to use it directly 99.9% of the time. (By the way, communication via UNIX-style pipes under Linux has been zero-copy for quite some time now.)

That said, there are cases where you need it if you want to squeeze out that last ounce of performance. For instance, enterprise-scale RDBMS software like Oracle uses shared memory and fine-grained locking to maximize concurrency and performance. But these kinds of ultra-high-performance apps would never be written in a language like newLISP in the first place.

opinions

Shared memory isn't hard when used properly. It isn't "low level" either. For example, when used uni-directionally, it is no harder than pipes. Shared memory isn't only for efficiency, either. It allows for simpler abstractions than sending chunks of buffers via message passing or other convoluted APIs. (By the way, calling UNIX-style pipes under Linux zero-copy is one thing, but they won't be zero-copy when used in a program. Write to pipe, read from pipe, resulting in user/kernel copies. Not exactly zero, is it?)

Coupling discussion of shared memory with huge RDBMS is new to me. I stay away from those things. My use of shared memory in the past has been in real-time control and embedded systems, where it is used widely, even in safety critical systems.

I was not advocating newLISP for shared memory usage per se. I noted that newLISP had a novel feature uncommon in other programming systems I have used in the past. Convinced or not, I find these features wisely chosen and implemented, based on real day-to-day programming compromises and working patterns. As for your assertion that one would never write ultra-high-performance apps in newLISP, I have learned not to make absolutist statements like that over the years. I also find it interesting that people who are so concerned about the pass-by-reference vs. pass-by-value behavior of cells in a novel LISP-style language are at the same time so biased against the features of the language that allow extremely efficient control of huge amounts of data, not just parameters.

newLISP and shared memory

It is one of the best ways to enable use of UNIX shared memory facilities

Really?

It shares a page (of a system-dependent size) at a time. You can put a single int, a float, a string, or a boolean there. The other process can read it at any time, but there is no attempt to guarantee data integrity: processes must synchronize by other means to avoid reading a partially written value, to know when a new value has been put there, or to know when the receiving process has already received the data so the space can be reused for the next value.

There is no checking whether the string will fit in the space. If it doesn't, it will silently overwrite whatever memory is lying after the mapped page. When such string is received, it will similarly pull data from whatever memory was lying after the shared page in the receiving process.

On Linux only, you can pass several values through the same share by performing pointer arithmetic on the share handle (which is merely the pointer cast to a 32-bit integer; this is Linux-specific because on Windows the handle is a Windows handle, which obviously doesn't support pointer arithmetic). You have to manually compute the offsets of the other values, based on how many bytes particular values will take. The documentation explains how much particular values take, silently assuming a 32-bit architecture. Anyway, only 32-bit architectures are supported by newLISP at all.

There is no way to release a share.

“One of the best ways to enable use of UNIX shared memory facilities”? This is the most ridiculous interface to shared memory I've seen. And you wonder why I bash newLISP...

incorrect information

It is not true that there is "no checking whether the string will fit in the space".

Page sizes are system dependent obviously, but the page sizes can be determined at runtime after 'share'.

(length (share (share) (dup " " 1000000)))

On Windows and on Linux running on the Intel 386 architecture, it'll be around 4K. You can store a large string into it, which can contain many different kinds of data.

It is not true "it will silently overwrite whatever memory is lying after the mapped page". This shows your lack of understanding about how most unix kernels implement mmap.

It is not true that "there is no attempt to guarantee data integrity". Semaphores can be used for synchronization and protection of the shared resource.
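
For example, guarding a share with a semaphore looks roughly like this (a sketch; error handling and process setup omitted):

; create a shared page and a semaphore to guard it
(set 'mem (share))
(set 'sem (semaphore))
(semaphore sem 1)    ; signal: mark the share as free

; in any process touching the share:
(semaphore sem -1)   ; wait: acquire exclusive access
(share mem "hello")  ; write while holding the semaphore
(semaphore sem 1)    ; signal: release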

It is not true that "there is no way to release a share". In general, it is incorrect to say "there is no way to XYZ" in newLISP, because newLISP can easily call any function in any C library.

(import "/usr/lib/libc.so" "munmap")

I am not surprised that you bash newLISP. Many people feel very free to bash whatever they want on Internet. If newLISP interface to shared memory is the most ridiculous you have seen, you haven't seen much. There are other systems I have seen that do much worse.

newLISP and shared memory

Oops, I missed trimming the string size to fit the buffer in the source, sorry.

Page sizes are system dependent obviously, but the page sizes can be determined at runtime after 'share'.

But why should there be an arbitrary limit at all? Let's say I have data I want to pass to another process. It should be the library's job to make the buffers large enough to fit my data, not my job to only send data smaller than some limit imposed by the library, beyond which strings are silently truncated. Especially as the underlying OS interface is not limited to the page size.

It is not true that "there is no attempt to guarantee data integrity". Semaphores can be used for synchronization and protection of the shared resource.

It's the programmer who is responsible for synchronizing access here. Since the interface is based on sending and receiving messages (rather than providing shared access to an existing array), pipes would seem more appropriate most of the time; they synchronize automatically, and they don't impose a fixed size limit.

(import "/usr/lib/libc.so" "munmap")

I would expect a documented interface for releasing whatever resources there is an interface for obtaining, and one that is just as portable (what about Windows?).

final contribution

There isn't an arbitrary limit. The page sizes are not arbitrary; they are decided by the kernel and the MMU architecture. mmap interfaces are sensitive to page boundary conditions. newLISP's share feature exposes a simple interface for novices. If one wishes to exploit all aspects of the underlying mmap API, one can, within newLISP.

You are confusing language features and library features. Libraries can be built on top of a given language like newLISP to give other programmers a very error-resilient, safe, and easy API.

The programmer who writes such a library will be responsible for using semaphores. Why are semaphores so hard? They are not. Pipes are easier to use, but so are message queues. I have seen systems that implement pipes and message queues using shared memory and semaphores; VxWorks used to do that in the kernel. If you are arguing that pipes are better and easier for many things, I agree. But you are digressing.

I don't expect a language to be perfect. I expect a language to be useful and not get in the way of hacking. For me newLISP does that. For you, perhaps not.

Good luck with your Kogut.

Anyone has been following newLISP?

I wonder if someone here has been following it, and has more to say than what came up when this thread started. Is it quite usable? Is it fast enough for a scripting language? How convenient are contexts?

Following newlisp?

...I would say: start using it, instead of talking about it...

What I can tell you is that newLisp is currently used in some heavily redundant production environments (running on Solaris, Tru64, and Linux) worldwide, doing a great, steady, reliable job!

And... the support (and community) for newlisp is great!