Yearning for a practical scheme

I have spent the last year learning Common Lisp and then Scheme. Especially after watching most of the SICP videos, I am completely enamored with Scheme. It was a godsend for my natural language course last semester, in which I needed my only constraint to be my imagination rather than my programming language. I am addicted. But I am also in crisis, because Scheme feels both a blessing and a curse. It's a blessing for the obvious reasons, but it's a curse because it doesn't feel like Scheme is everything I want it to be and I don't exactly know why. It feels impractical. I have some questions about, and comments on, my experience.

First, my Scheme code feels sloppy, much more so than the mind-numbing Java I write for my research job. Is it just all of Java's boilerplateism that's giving it the illusion of better organization? Sure, there's more code, but I never have to think twice about where to put that code. Maybe it's Java's rigid design that's forcing me to be more organized when I use it. In imperative languages, the creation of new procedures often feels very arbitrary. It took experience for me to learn when to excise a handful of imperative operations and move them to their own procedure. In Scheme this feels more natural... every procedure seems to map well to the conceptual operation it performs. I think this has a lot to do with functional programming's emphasis on the "computation as calculation" perspective. Because imperative languages highlight the computer's role as a storage device for program state, perhaps they don't lend themselves well to the conceptual breakdown of code into procedures. But is there something about the view of computation as "manipulation of storage" that lends itself to a better conceptual breakdown when it comes to collections of related procedures? In my Java programs I always feel like there's a place for everything and everything is in its place. My Scheme programs (though I haven't written THAT many full-blown programs) always end up being huge listings of procedures grouped into a single massive file... This probably has as much to do with inexperience as it does with language properties, but it also seems true that in Java I have to think about this sort of thing less. What are good organization techniques? Does anyone have advice or guidelines on breaking Scheme programs into compilation units?

Coming from the static world of C++ and Java, one of the most seductive parts of Lisp early on was its complete type dynamism. Having the straitjacket removed felt wonderful, but as I gain more experience, I'm starting to miss the straitjacket. I know that Haskell and OCaml are statically typed, and I've ventured a little into Haskell, but I still really like Lisp. It feels more "low-level". I like being able to use assignment if I feel like it... it seems to me that it's convenient for the human mind to model "computational objects" as little pools of shared mutable state. It's a shorthand we also use in dealing with the real world. Even though everything is interconnected, at least at the atomic level, meaning there is no such thing as truly isolated state, we nevertheless think of the world in terms of objects. Without assignment, it seems harder to section off abstractions from one another. I guess the Haskell crowd could achieve this with monads. I'd be interested in hearing people's thoughts. Maybe this will change as I learn more about Haskell, but I still don't like it as much as I like Scheme... I don't completely know why, but I think Scheme's lack of syntax has a lot to do with it. Scheme programs look pretty and nested on the page, while Haskell looks more like a math textbook... cool in its own right, but I prefer the former as of yet... So here is my question: would it be possible to give Scheme static typing while maintaining its appeal? Has this already been done? If not, why? I haven't thought about it at length, and maybe Scheme's elegant syntax would be crushed by type annotations, but I still wonder. Why not?

Lastly, what do people think about implementations? I keep shopping around for one that I like, and I keep coming up empty. PLT falls short because it depends too much on DrScheme, and I want to use Emacs. I don't think I should have to abandon Emacs just to gain access to the stepper and interactive debugging. MzScheme's text mode debugging support seems paltry. I understand the appeal of Scheme's minimal definition, but when I'm using it for practical purposes, I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in. I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality. I guess what I want is a Scheme that reminds me of Common Lisp while not being as ugly as Common Lisp is. I need a full-fledged toolbox, and I need it to feel seamlessly integrated. Does anyone have suggestions? If not, why doesn't such a Scheme exist?

If you read this far, thanks for your advice / comments. I have been reading and learning from this community for a long time and now I am finally a member.


SLIME48

You could have a look at Taylor Campbell's recent SLIME48 hack, which allows SLIME to be used with Scheme48.

Maybe You're Looking for Smalltalk

I don't mean to be facetious, but maybe you're looking for Smalltalk rather than Scheme.

It's not taken facetiously...

It's not taken facetiously... I am somewhat intrigued by Smalltalk from what little I know about it, so maybe I'll check it out. But I was always under the impression that it was firmly in the imperative paradigm. I feel like Scheme does everything right except that it seems to encourage fragmentation into lots of tiny pieces rather than a coherent, convenient system. Thanks for the suggestion. Thanks also for the pointer to the SLIME Scheme48 hack. Cool.

Smalltalk's extreme use of bl

Smalltalk's extreme use of blocks (closures) makes it seem to me more functional than Lisp/Scheme in its way, for some reasonably meaningful value of "functional" (the higher-order programming style).

I think it will be a lot of fun when I find a chance to really hack some Squeak.

the problem with squeak is th

the problem with squeak is the Mickey Mouse-themed virtual desktop.

Not the most egregious problem

Although the most immediately obvious problem is that the UI is designed to appeal to five-year-olds, more problematic is that (as of some 12 months ago) the UI can only be created within the Squeak window: no one seems to have created bindings for other windowing systems, the recommended graphical toolkit has no up-to-date documentation, and the available editor is relatively primitive.

It might be possible to learn to use Squeak in the context of a non-GUI application, such as Seaside, though. I've never used Seaside. As an aside, are there any good examples of apps made in Seaside, with code available? The documentation and tutorials also don't make it clear how much is already done for the developer, for example in creating AJAX components.

All these reasons...

Are why I'm building up Slate: UI architecture flexibility, good documentation, friendly to the command line, evolved libraries, and so on. Basically I'm taking all the good ideas in Smalltalk and Lisp systems and melding them.

programmers are 5 year olds

nerds, geeks, and what have you (the source of postmodern hipsterism) are 5-year-olds via nostalgia. Who isn't enamoured of cute cartoon figures and the like? I think it would be very difficult to find a programmer who would sniff at Oswald the Lucky Rabbit and say "kid's stuff" instead of being captured by the pop-culture buzz.

The problem with the Squeak UI is that they should have tried more consciously to ape a classic cartoon style. Something like the old Fleischer Studios would make a great desktop theme, I think.

Other windowing bindings

There are some bindings to wxWidgets (formerly wxWindows) at wxSqueak.

To go along with this is Morphic for wxSqueak.

I believe there are some preliminary bindings to Mac OS X's GUI as well.

There has also been significant work on a replacement for Morphic: Tweak (Morphic is the UI "designed to appeal to five-year-olds" that "has no up-to-date documentation"). Tweak also relies on extensions ("method annotations" I think) to the base language.

Squeak -- bah! Try Ruby :)

Squeak is a wonderful tool/environment/Smalltalk implementation. However, its GUI is so orthogonal to every other desktop I've ever used that I've never been able to do anything with it. I'm so used to text editors and code, right-clicking to bring up a context sensitive menu, Windows, KDE, vim, etc. that the whole Squeak desktop is like parachuting into a place where nobody speaks English with only $1.25.

Now I'm sure if I stuck with Squeak, I'd at least be able to function there, and there is always the risk that I'd like it so much I'd be unable to go back to my old ways. :) However, I've recently discovered (perhaps acknowledged is a better word) Ruby, and that's where I'm going.

Ruby is touted as combining the best of Java, Smalltalk, Perl and Lisp, and features closures prominently. It's object-oriented (classes/attributes/methods) rather than functional. And it's a big deal because the "agile" development crowd is moving to Ruby as a replacement for the "traditional" scripting languages Perl, PHP and Python for rapid development of web applications. If you've been under the proverbial rock, do a search for Ruby on Rails. :)

"mainstream" Smalltalks

NB These days the "mainstream" Smalltalk implementations are free for non-commercial use.

Squeak is an interesting implementation but sometimes there's an incorrect impression that Smalltalk is Squeak.

scheme

One language I truly love, given its minimalist syntax, very small core API, advanced concepts, and flexibility like no other. I also very much like the parenthesised syntax, since it makes it very easy to navigate through the code when using a decent editor like Emacs.

Still, I truly regret that its specification only deals with the language itself, refusing to enforce even minimal ways of dealing with the outside world. I find it appalling that in this day and age, the only standard way to connect with the outside world, to interoperate with other systems, is via files. I want a high-level socket interface and perhaps an FFI wrapper. Is it really asking too much?

As for organization of code, perhaps you should look into modules? Unfortunately, they're not specified either, though that's about to change. Every major implementation has its own module system.

as for your comments...

"It feels impractical."

yes. absolutely. IPC now!

"my Scheme code feels sloppy"

you're a novice. you'll get better, eventually.

"My Scheme programs always end up being huge listings of procedures grouped into a single massive file"

modules. that's all. topicalization.
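
For what it's worth, here's a minimal sketch of what that topicalization looks like in PLT Scheme's module system (the module and file names here are made up for illustration; other implementations spell modules differently):

```scheme
;; stack.ss -- one topic per file (hypothetical example)
(module stack mzscheme
  (provide make-stack stack-push stack-pop)
  (define (make-stack) '())
  (define (stack-push s x) (cons x s))
  (define (stack-pop s) (values (car s) (cdr s))))

;; main.ss -- a client imports just the topic it needs
(module main mzscheme
  (require "stack.ss")
  (display (stack-push (make-stack) 42)))
```

Instead of one massive file of procedures, each file becomes a small vocabulary with an explicit interface.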

"type annotations, but I still wonder. Why not?"

i don't miss it. if i did, i'd be using haskell or ocaml more often.

"when I'm using it for practical purposes, I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in. ... I want the Scheme equivalent of saying import java.util.foo"

I don't think the object systems feel tacked on; I think they feel the same as any code library. And they show OO is nothing special, just another useful abstraction brought into Scheme given its flexible nature. They don't feel any less built-in than car or cdr, since they are all just procedures.
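
To illustrate the "just procedures" point, here's a toy message-passing object built from nothing but a closure (plain R5RS, except that the `error` call is implementation-dependent):

```scheme
;; A counter "object": its state lives in a closure, and methods
;; are selected by dispatching on a message symbol.
(define (make-counter)
  (let ((n 0))
    (lambda (msg)
      (case msg
        ((increment!) (set! n (+ n 1)) n)
        ((value) n)
        (else (error "unknown message" msg))))))

(define c (make-counter))
(c 'increment!)  ; => 1
(c 'increment!)  ; => 2
(c 'value)       ; => 2
```

A full object system mostly wraps sugar and conveniences around this same idea.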

"I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality."

modules give you that. SRFIs don't use a module system because there's no standard one, but that's about to change in R6RS.

"why doesn't such a Scheme exist?"

It has often been noted that the Lisp community is very fragmented. I wish there were some commonly agreed-on interfaces for practical everyday programming needs. It doesn't even have to be cross-platform libs of some sort, just something more like, say, Python's PEPs. For instance, both the mysqldb and pygresql modules for database access provide the very same interface described in a PEP, despite their teams probably never having worked together.

that would be good for scheme.

Modules

I suspect you're on to something here.

One really serious difference between Java (and C++, conceptually) and Scheme is modularity. With Java, almost everything has exactly one place it can be, once you at least partially grok Java's version of OO design. Scheme, and many other languages, don't enforce those kinds of limits, and if most of one's experience is with Java or C++, that lack of enforcement will feel sloppy. On the other hand, if Scheme did enforce something that didn't match Java, it would feel restrictive.

How much time has been spent on learning good OO Java design? Hopefully, subsequently learning good Scheme design would be easier, but the work can't totally be avoided. Until someone has a feel for what a good design looks like in an environment, it's hard to know how to organize code. And what defines a good design interacts strongly with the module system.

i've had the same experience

i've had the same experience with both ocaml and scheme. what helped me was deciding that i would program in a functional manner, and focus on the appropriate language constructs. otherwise, i got sidetracked into solving the same problem in different ways.
(credit to paul s for suggesting this, iirc)

comments on Scheme

So here is my question: would it be possible to give Scheme static typing while maintaining its appeal? Has this already been done? If not, why? I haven't thought about it at length, and maybe Scheme's elegant syntax would be crushed by type annotations, but I still wonder. Why not?

Yes, there's a lot of literature on that. See:

  • Cormac Flanagan, Matthew Flatt, Shriram Krishnamurthi, Stephanie Weirich, Matthias Felleisen. "Catching Bugs in the Web of Program Invariants." PLDI '96. Available online: link.
  • Cormac Flanagan and Matthias Felleisen. "Componential Set-Based Analysis." TOPLAS '99. Available online: ps.gz.
  • Philippe Meunier, Robert B. Findler, Paul A. Steckler, Mitchell Wand. "Selectors Make Set-Based Analysis Too Hard." HOSC '05. Available online: pdf.

The first and second describe the brains behind MrSpidey, DrScheme's first static type-checking system; the third is the brains behind MrFlow, the newer one. Unfortunately at the moment MrSpidey has bitrotted away and MrFlow isn't quite ready for prime time (not because it doesn't work well, but only because it doesn't handle full PLT Scheme, only R5RS + some PLT extensions, if I remember correctly). Both of these systems are so-called "soft" type systems, meaning you can run them over your Scheme program and they'll tell you about any type errors they find, but you're allowed to run the program anyway if you're sure those type errors won't actually cause it to crash.

Another approach is to use "contracts" rather than types, meaning annotations on functions that specify type-like things but are actually checked dynamically and can assign blame for failure on a particular component of the system. I think this is considered more Schemely by a lot of people, and in any event a full contract system is implemented in PLT Scheme and is used fairly heavily in PLT code.
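
A rough sketch of what a contract looks like in PLT Scheme (the module and function here are invented for illustration; see the mzlib contract documentation for the real details):

```scheme
;; Hypothetical module exporting a function under a contract
(module arith mzscheme
  (require (lib "contract.ss"))
  ;; double promises to map integers to integers. If a caller passes
  ;; something else, the contract system blames the *calling* module,
  ;; not arith -- that's the blame assignment mentioned above.
  (provide/contract
   [double (-> integer? integer?)])
  (define (double n) (* n 2)))
```

Unlike a static type, the check happens at run time, at the module boundary.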

As for implementations, I've got to say that PLT is a good choice, and you can trust me to be totally unbiased since I'm an active developer of the PLT system :). I like DrScheme and I think some of its features are worth putting up with its deficiencies with respect to emacs, but if you don't you can always use Neil Van Dyke's Quack package to give yourself a PLT Scheme editing mode in emacs.

I don't want the API to feel tacked on. I don't want to import an object system. I want it to be there and feel like it's built-in.

PLT's object system feels quite "built-in" to me even though I've got to say (require (lib "class.ss")) to get at it, considering that the entire graphics system is implemented in it. I think this objection would go away very quickly if you started using the system in earnest.

I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality.

PLT Scheme actually allows you to refer to SRFIs by names rather than numbers (e.g. (require (lib "list.ss" "srfi")) rather than (require (lib "1.ss" "srfi"))), though I would discourage that practice; the names are made up by us whereas the numbers are standard.

In any event, if you find yourself always loading particular libraries, it's easy enough to build your own language that automatically imports those things and either (1) use it as the base language for modules you write or (2) make it a language level for DrScheme.
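
As a sketch of option (1), assuming PLT's module system (the file name my-scheme.ss is made up):

```scheme
;; my-scheme.ss -- a "batteries included" base language:
;; mzscheme plus the SRFI libraries we always load anyway
(module my-scheme mzscheme
  (require (lib "1.ss" "srfi")     ; list library
           (lib "13.ss" "srfi"))   ; string library
  (provide (all-from mzscheme)
           (all-from (lib "1.ss" "srfi"))
           (all-from (lib "13.ss" "srfi"))))

;; A client module then names it as its initial language,
;; and gets everything above without any explicit requires:
;; (module my-app "my-scheme.ss"
;;   ...)
```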

Thanks for your advice... I h

Thanks for your advice... I had seen MrSpidey when I was just getting started learning Scheme (with PLT) but I didn't really understand what it did back then. I like the idea of a "soft" type system... does MrFlow accept type annotations like Haskell?

But as far as DrScheme, I can't deal with it. I tried. I don't like it. And then I was frustrated that so many of the things I wanted to do (such as debug) required it. If PLT Scheme had better separation of language features from IDE or turned DrScheme into a super-cool Emacs replacement with Scheme scripting in place of warty Elisp I'd probably go back, but as of now I'm looking at Gauche and Chicken. Not meaning this to be a rant or anything... I'm certainly not programming any Scheme implementations just yet (although after I finish EOPL, maybe?). This is more just feedback.

soft typing, debugging

does MrFlow accept type annotations like Haskell?

Not at the moment, but I believe there's active development on making it understand the type-implications of user-defined contracts. Under that system, if you gave a function the contract (integer -> integer), MrFlow would understand that, and try to prove away the necessity for inserting any contract checks in the generated code or find a circumstance where it appears that the function is used in a way that's not consistent with the contract.

I was frustrated that so many of the things I wanted to do (such as debug) required [DrScheme].

Actually that's not true, though that may not be very well known. mzscheme -M errortrace [other mzscheme arguments afterwards ...] gives you the same error tracing facilities that DrScheme uses. In general you only need DrScheme for things that don't really make sense without a GUI, like the check-syntax binding-occurence/bound-occurence arrows.

Some thoughts

First of all, if you have any API dependencies on Java, your best bet is SISC: http://sisc.sf.net

SISC is fully R5RS compliant, supports the full numeric tower, tail-call optimisation, all of that beautiful stuff. It has *extensive* support for bringing Java objects and methods into the Scheme world, and though it's somewhat more cumbersome to use than cute little hacks like JScheme's JavaDot, too much (syntactic) sugar is bad for you.

Secondly, everyone's Scheme code is sloppy at first, and it tends to be sloppy in certain standard ways, unless you're like someone I know whose first exposure to programming was taking the SICP course at MIT. This is because we're all used to the standard imperative model: allocating space for variables, changing their values, etc. Scheme is like a whole different paradigm. While the ability to mutate stuff is convenient in some cases, as someone once told me, a good Schemer should cringe whenever he types an exclamation point. Or, less radically: when programming Scheme you should learn to economise your exclamation points. Thinking functionally helps you understand your programs a whole lot more clearly, including when and how to bend the rules.
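
A tiny illustration of economising on exclamation points: the same sum written the way an imperative habit suggests, and then without any mutation at all (plain R5RS):

```scheme
;; Imperative reflex: accumulate by mutating a variable
(define (sum-imperative lst)
  (let ((total 0))
    (for-each (lambda (x) (set! total (+ total x))) lst)
    total))

;; Functional version: the "accumulation" is just recursion
(define (sum lst)
  (if (null? lst)
      0
      (+ (car lst) (sum (cdr lst)))))

(sum '(1 2 3 4))  ; => 10
```

Both work, but the second says what the sum *is* rather than how the machine should update storage.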

Your initial difficulties with where to put the code were something I experienced also, in a different language: Smalltalk. I had gotten so used to thinking functionally that when it came time to encapsulate things into objects, the lines became fuzzy as to which objects should have which methods. So I just practiced, and things got a little clearer with time.

All this will come with time and practice, so keep Scheming.

indeed

I'm in basically the same boat. Currently I'm using Chicken Scheme, which I can deal with, but, uhh... Well, it can be frustrating sometimes.

I love Scheme and am using it successfully, but I strongly believe that ultimately dynamic typing is very much the wrong solution. All I can say is wait a few years. As soon as I make enough money that I can afford to spend significant amounts of coding time on something not likely to make me any money, I plan on designing and implementing a Lispy/Schemey but statically typed language with type inference, variant types, pattern matching, a cool system for accomplishing what OO accomplishes, modules, an interface to C, standard interfaces for sockets & threads & GUIs, etc... and a text editor like Emacs but using this language rather than elisp. Say what? You can't afford to wait around for years in hopes that some random guy from the web creates a great language? Oh...

Why not actually just write a

Why not actually just write a static type-checker for scheme or lisp?

well

Because I don't want Lisp with static type checking. I want a new language (with some inspiration from Lisp).

Three Syllables, Sounds Like...

mike: I plan on designing and implementing a Lispy/Schemey but statically typed language with type inference, variant types, pattern matching, a cool system for accomplishing what OO accomplishes, modules, an interface to C, standard interfaces for sockets & threads & GUIs, etc... and a text editor like Emacs but using this language rather than elisp.

http://caml.inria.fr
Efuns

Let's go through the list:

  • "Lispy/Schemey:" Check. O'Caml, like Lisp, is an impure functional language. Also like Lisp, it has extremely powerful syntactic extension mechanisms.
  • "Statically typed with type inference:" Check. Like all members of the ML family.
  • "Variant types:" Check. In two flavors, traditional and polymorphic.
  • "Pattern matching:" Check. Again, like all members of the ML family.
  • "A cool system for accomplishing what OO accomplishes:" Check. O'Caml includes an object system with either classes or immediate construction, multiple inheritance, and a clear distinction between inheritance and subtyping. O'Caml remains, as of this writing, the only type-inferred statically-typed object/functional language to have escaped the lab (cf. O'Haskell).
  • "Modules:" Check. Again, very much like the rest of the ML family. Modules, functors, and recursive modules are all there.
  • "An interface to C:" Check. Actually, there are several.
  • "Standard interfaces for sockets & threads & GUIs, etc:" Check, check, check. Sockets and threads are in the standard libraries; Tk support is in the standard libraries and bindings to Gtk+, COM, Win32... are available from third parties.
  • "A text editor like Emacs but using this language rather than elisp:" Check. That would be Efuns, linked above.
Let me also mention that O'Caml has an interactive toplevel, a bytecode compiler with a time-travel debugger, and a native-code compiler with a profiler. Generally speaking, O'Caml is an excellent performer, i.e. native code comes within epsilon of the equivalent C++ code. As Graydon Hoare says in this slide:
Q: Does Ocaml have:
  • a debugger, a profiler, lex, and yacc?
  • array, for loops, printf, and fast I/O?
  • generics, modules, and typesafety?
  • hashtables, objects, and garbage collection?
  • map, fold, lambda, cons, and quote?
  • an emacs mode and a free, portable compiler?
A: Yup.

Q: Why am I still coding in C/C++/Java/Lisp?

A: Beats me.

Wow

You've really sold me on O'Caml there. Are there any catches I should know about?

Can you really do quoting and evaluating of syntax like you can in Scheme/LISP? I'm curious as to how that could work without S-expressions... I guess I'll go have a look.

Great PL with an archaic syntax

It's feature complete. There is some stuff I don't like:

  1. Pretty archaic syntax (especially when compared to Haskell), postfix polymorphic type declarations, verbose syntax for named structures, odd syntax for OO constructs, odd syntax for variables, pattern matching needs lots and lots of superfluous parentheses.
  2. Occasional type system oddities. (Sometimes a declared type of a construct is more general than the actual inferred type)
  3. Native integer types are tagged 31 bits so you get occasional quirks when binding with native C libraries.
  4. Currying functions is supported, but sometimes explicit eta-expansions need to be added.
  5. Lacks Haskell's type classes, but this is solvable.

Ocaml feels a lot like ML but with features hacked on top of it at the expense of the syntax. That's no biggy though, there are lots of languages with a lousier syntax.

indeed

i too don't dig OCaml's archaic syntax. ;;?? # for attribute access?? ; for list items separator??

it is verbose and odd without much reason. it should be much sweeter for C++ converts if it used the C++ equivalents...

Good Points

There's a helpful (to me, anyway) little chart comparing some snippets in Standard ML and O'Caml here. Graydon Hoare's One-Day Compilers slides also have an "O'Caml phrasebook," showing some O'Caml code side-by-side with either C++ code or Common Lisp code, beginning here.

With respect to native integers, while it's true that "int" is a tagged 31-bit value on 32-bit platforms, we should point out that O'Caml also has an "int32" type with its associated functions in the Int32 module. For most purposes, just doing "open Int32" will be sufficient. Also remember that appending "l" to constants will make them int32:

# 3l;;
- : int32 = 3l
and that printf also supports the int32 type:
# open Printf;;
# printf "%lX\n" (-1641380927l);;
9E2A83C1
- : unit = ()
The code to write an int32 to an out_channel in little-endian order looks like this:
(** This function writes the int32 passed to the output channel passed in little-endian order.
    @author Paul Snively
    @param out_chan out_channel
    @param n int32
    @return unit *)
let output_int32_le out_chan n =
  let aux n =
    let byte_mask = of_int 0xff in
    let char_of_int32 x = Char.chr (to_int x) in
    let d0 = char_of_int32 (logand n byte_mask) in
    let d1 = char_of_int32 (logand (shift_right_logical n 8) byte_mask) in
    let d2 = char_of_int32 (logand (shift_right_logical n 16) byte_mask) in
    let d3 = char_of_int32 (logand (shift_right_logical n 24) byte_mask) in
    let little_endian = String.make 4 d3 in
      little_endian.[2] <- d2;
      little_endian.[1] <- d1;
      little_endian.[0] <- d0;
      little_endian in
    output_string out_chan (aux n)
and the code to input a little-endian int32 from an in_channel looks like this:
(** This function reads an int32 that is encoded in little-endian order from the passed in_channel.
    @author Paul Snively
    @param in_chan in_channel
    @return int32 *)
let input_int32_le in_chan =
  let aux le =
    if String.length le <> 4 then 
      raise (Invalid_argument "int32_from_little_endian");
    let d3 = of_int (Char.code le.[0])
    and d2 = of_int (Char.code le.[1]) 
    and d1 = of_int (Char.code le.[2]) 
    and d0 = of_int (Char.code le.[3]) in
      (logor (shift_left d0 24) 
         (logor (shift_left d1 16) 
            (logor (shift_left d2 8) d3))) in
  let buf = String.create 4 in
    really_input in_chan buf 0 4;
    aux buf
Both of these functions assume an "open Int32" has brought various functions on int32s or returning int32s (of_int, shift_left, logor, etc.) into scope.

Also, O'Caml is explicitly designed to support high-performance scientific computing: records and arrays consisting entirely of floats store their values unboxed, and the Bigarray module supports multidimensional unboxed arrays of ints or floats laid out in either C or FORTRAN style so that they can be passed directly to existing C or FORTRAN code, a fact that the bindings to popular libraries such as BLAS and LAPACK exploit.

The lack of type classes in the ML family is, as I think you alluded to, often dealt with by the use of phantom types. In O'Caml, polymorphic variants are frequently employed as an alternative to phantom types.

Phantom types

The lack of type classes in the ML family is, as I think you alluded to, often dealt with by the use of phantom types.

How does this work? Could you post an example or a link?

Some Examples...

... of phantom types and polymorphic variants in O'Caml can be found here. Please let me know if you need more.

Type classes

Hmm.. I guess I was looking for an example of encoding Haskell-style type classes using sml or ocaml phantom types. Maybe something simple from the Haskell prelude?

The generalizations of phantom types (e.g. First-class phantom types or Guarded Recursive Datatype Constructors) all seem to require extra typechecking machinery, which I don't know how to simulate.

As long as y'all are on the subject of quoting...

I should spend more time translating CTM, but I'm in the middle of section 2.2 of SICP. Section 2.3 has been on the back of my mind though, and since it has to do with quoting, perhaps I should be an opportunist and take advantage of the current line of thought.


From 2.3.1, SICP gives the following Quotation example:

   (define a 1)
   (define b 2)
   (list a b)
   (1 2)
   (list 'a 'b)
   (a b)


So what's the equivalent in an ML language?

   val a = 1;
   val b = 2;
   [a, b];
   ???


Thanks.

And I'll try not to ask too many more of these language specific types of questions here, lest Ehud loses patience. :-)

Good Question!

Since you're working with a list, I will too.

Valhalla:~ psnively$ ocaml
        Objective Caml version 3.09.2

# #load "camlp4o.cma";;
        Camlp4 Parsing version 3.09.2

# #load "q_MLast.cmo";;
# let _loc = Lexing.dummy_pos, Lexing.dummy_pos;; (* Dummy source location *)
val _loc : Lexing.position * Lexing.position =
  ({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
    Lexing.pos_cnum = -1},
   {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
    Lexing.pos_cnum = -1})
# <:expr< [a; b] >>;; (* Quote the expression "[a; b]" *)
- : MLast.expr =
MLast.ExApp
 (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
    Lexing.pos_cnum = -1},
   {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
    Lexing.pos_cnum = -1}),
 MLast.ExApp
  (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
     Lexing.pos_cnum = -1},
    {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
     Lexing.pos_cnum = -1}),
  MLast.ExUid
   (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1},
     {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1}),
   "::"),
  MLast.ExLid
   (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1},
     {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1}),
   "a")),
 MLast.ExApp
  (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
     Lexing.pos_cnum = -1},
    {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
     Lexing.pos_cnum = -1}),
  MLast.ExApp
   (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1},
     {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1}),
   MLast.ExUid
    (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
       Lexing.pos_cnum = -1},
      {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
       Lexing.pos_cnum = -1}),
    "::"),
   MLast.ExLid
    (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
       Lexing.pos_cnum = -1},
      {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
       Lexing.pos_cnum = -1}),
    "b")),
  MLast.ExUid
   (({Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1},
     {Lexing.pos_fname = ""; Lexing.pos_lnum = 0; Lexing.pos_bol = 0;
      Lexing.pos_cnum = -1}),
   "[]")))

As this example makes crystal clear, "[a; b]" is pure syntactic sugar for "a :: (b :: [])".
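To see the equivalence directly, here's a quick sketch you could paste into the toplevel (names are made up for illustration):

```ocaml
(* Both expressions build the same list; [1; 2] is just sugar
   for consing onto the empty list. *)
let xs = [1; 2]
let ys = 1 :: (2 :: [])
let _ = assert (xs = ys)  (* structural equality holds *)
```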

wow

just wow

is it me or is (list (quote foo) (quote bar)) a lot easier on the eyes and hands? guess metaprogramming with static languages still can't hold a candle to lisp...

*edit*: oops! sorry! that's the _output_ of the command :P

Noise

Yeah, sorry not to be clear. "<:expr< [a; b] >>" is the actual quotation. The rest is the result of that quotation, which is dominated by the repetition of the dummy source-code location. If you manage to ignore that, you can see that the AST is basically formed from simple type-constructor/token pairs.

Quoting in Lisp vs. (Meta)Ocaml

Here's the result of a MetaOCaml session. The sign # is OCaml's top-level prompt. The interpreter's answer is printed underneath.

        MetaOCaml version 3.08.0 alpha 016

# let a = 1;;
val a : int = 1
# let b = 2;;
val b : int = 2
# [a;b];;
- : int list = [1; 2]
# let fab = .<fun a b -> [a;b]>.;;
val fab : ('a, 'b -> 'b -> 'b list) code =
  .<fun a_1 -> fun b_2 -> [a_1; b_2]>.
# .! fab;;
- : 'a -> 'a -> 'a list = <fun>
# (.! fab) 3 4;;
- : int list = [3; 4]

As you might have guessed, .! (pronounced `run') is akin to eval of Scheme or Lisp (only it is far better defined). The run construct also works in compiled code: it essentially does run-time code compilation and linking. There are versions of .! that translate the code into C or Fortran, compile it, and link it in. That C or Fortran code is of course usable on its own (so it can be saved in a file and made part of a library).

Template Haskell

Here's what it looks like in Template Haskell:

Loading package base-1.0 ... linking ... done.
Prelude> :m + Language.Haskell.TH.Syntax
Prelude Language.Haskell.TH.Syntax> let a = 1
Prelude Language.Haskell.TH.Syntax> let b = 1
Prelude Language.Haskell.TH.Syntax> [a,b]
[1,1]
Prelude Language.Haskell.TH.Syntax> let exp = [| [a,b] |]
Loading package haskell98-1.0 ... linking ... done.
Loading package template-haskell-1.0 ... linking ... done.
Prelude Language.Haskell.TH.Syntax> runQ exp
ListE [VarE a_1627395084,VarE b_1627396234]
Prelude Language.Haskell.TH.Syntax> $(exp)
[1,1]
Prelude Language.Haskell.TH.Syntax>

where [| .. |] creates a (computation that yields a) quotation, and $() evaluates a quotation.

all these macro exercises...

in MetaOCaml or Template Haskell are very primitive. Of course these languages have some advantages over Lisp (or Common Lisp in this case), but I would really not put these metaprogramming systems and CL's defmacro on the same level. Just try to implement something like this in MetaOCaml or Template Haskell:

(defmacro once-only ((&rest names) &body body)
  (let ((gensyms (loop for n in names collect (gensym))))
    `(let (,@(loop for g in gensyms collect `(,g (gensym))))
      `(let (,,@(loop for g in gensyms for n in names collect ``(,,g ,,n)))
        ,(let (,@(loop for n in names for g in gensyms collect `(,n ,g)))
           ,@body)))))

The advantage of Lisp is representing data and code the same way *by default*, and while you surely can implement something like this in other languages, it would be orders of magnitude less readable.
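For context, once-only is a macro-writing utility: it wraps a macro's expansion so that each named argument form is evaluated exactly once, in order. A typical use looks like this (square is a made-up example):

```lisp
;; SQUARE must evaluate its argument exactly once, even if the
;; argument has side effects; ONCE-ONLY does the gensym plumbing.
(defmacro square (x)
  (once-only (x)
    `(* ,x ,x)))

;; (square (incf n)) now increments N once, not twice.
```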

It's Mostly Good News

citylight: You've really sold me on O'Caml there. Are there any catches I should know about?

Sure: if you don't already know any member of the ML family, O'Caml seems really, really weird at first. Some of its syntactic differences from Standard ML are only there for Hysterical Raisins. Function application precedence rules mean that you end up parenthesizing (nested (function applications)) a lot. A lot of O'Caml programmers don't seem to find the standard libraries rich enough. I disagree with them, but it's something you'll hear about if you hang around the community.

citylight: Can you really do quoting and evaluating of syntax like you can in Scheme/LISP? I'm curious as to how that could work without S-expressions... I guess I'll go have a look.

Yes, using the Caml Pre-Processor and Pretty-Printer, known as camlp4, which is included in the O'Caml distribution. There's a very good tutorial on it, too. Basically, it separates the parsing task out from the rest of the compiler and gives you tools for concrete parsing to an AST, and tools for manipulating that AST, including quotation, quasiquotation, and antiquotation, just like Lisp. In addition, unlike Lisp, you can quote any language that you have a printer for. Graydon Hoare wrote a C quoting printer, Cquot, which he used in his One-Day Compilers presentation, in which he implements a very simple DSL by way of camlp4 and Cquot. In other words, his DSL gets compiled as an O'Caml program that prints C source code, the O'Caml program is run, the resulting C source code is compiled with gcc, and the resulting C binary does what the DSL code said to do.

Catches I've hit, my two cents.

In my experience, it is a pain to set up. I got it sorta working under Windows once, but then when I went to try to use the OpenGL/TK bindings there were bugs that I never got resolved (in particular, it wasn't getting some kind of mouse events, IIRC).

The second time, just recently, was I think on my Debian/Ubuntu GNU/Linux box at home. It is a PPC machine, an old Mac. Those details might explain why O'Caml couldn't even load any of the 'standard' libraries.

Maybe blame the installer rather than the language, but the whole gestalt has been very frustrating. And even if I got it to work, what if I wanted to give my code to somebody else to use? With Perl or Java (I shudder to think of using either, really) at least it is a lot more likely they will be able to use what I've written.

a second look

I have played a little with OCaml. I'll take another look.

"A text editor like Emacs but

"A text editor like Emacs but using this language rather than elisp:" Check. That would be Efuns, linked above.

Strictly speaking I think an editor like Emacs has to at least compile and run using the normal compiler :-)

Even though everyone loves O'

Even though everyone loves O'Caml and says it's the answer and what you're looking for, I still sympathize with your desire to make such a Lisp. There's just something innately appealing to me about S-expressions. They feel good.

They feel good, sure...

...but they're also provably sub-optimal from a theoretical point of view.

(Though this could be fixed if you got rid of car, cdr and cons.)

.

"...but they're also provably sub-optimal from a theoretical point of view."

Could you explain this for me? Or point me to an explanation?

What do S-expressions have to

What do S-expressions have to do with car, cdr, and cons?

Besides, even in S-expression based languages which have these operations, getting rid of them makes no more sense than getting rid of the equivalent operations in Haskell and ML.

Exactly my point.

Lispers conflate the notions of external representation, internal representation and evaluation strategy.

Thus, if you're going to talk about "s-exprs" or "code-as-data", you better well first specify what exactly you mean by "s-exprs" and where they're used.

Anyways, my point is two-fold:

- As external representation, I shouldn't care at all how s-exprs are implemented internally. It's just convenient syntax.

- As internal representation, linked lists do not work. Do I need to explain exactly why?

- Tying in with the previous point, if your computation model is based on munging linked lists, then your language has serious design problems. (Again, for reasons I shouldn't have to spell out, I hope.)

Making it explicit

Do I need to explain exactly why?

Yes, if you want to be taken seriously. As I explained before, this isn't a forum for "opinions". It is a forum for informed professional discussion. If you don't have any new ideas or results (or links to new research etc.) to add to the discussion, simply mentioning your opinion is a waste of time.

I ask you personally to keep this in mind. The recent threads you participated in resemble trolls, and do not live up to the standard of discussion we like to have around here.

Nice.

Lack of comprehension on the part of some readers does not make a "troll"; in any case, I realize that the backlash simply comes from the fact that I had the gall to badmouth Lisp. (Making fun of Java never earns someone a "troll" label, for some reason.)

Back to the topic, though: all modern languages use either parse trees or random-access arrays for internal representation. Using linked lists is stupid because they are equivalent to trees and arrays but with demonstrably poorer algorithmic characteristics. (Precisely because they are "linked", i.e. linear.)

Using linked lists for a computation model is likewise a poor design choice because you will then need a supporting runtime for memory management that is implemented in some other language. In other words, your language isn't self-supporting.

For example, there are Prolog implementations that do not require a garbage collector. (I cite this not to advocate any particular language, but to point out that proper design decouples the computation model from the supporting runtime.)

You are conflating compile time and run time

The data structure used to represent a programming language's terms at compile time in no way imposes any constraints on the runtime representation.

and while we are on the topic of memory management, I'd like to point out Daniel Wang's dissertation, which discusses how to implement a garbage collector in a type-safe language.

ftp://ftp.cs.princeton.edu/techreports/2001/640.pdf

Duh, that was my point.

In the original message I complained about the fact that Lisp uses the same data structure for parsing and for runtime representation.

Now this is easily fixed in the obvious way (just use two different structures), but such a language wouldn't be Lisp anymore. (Because you couldn't use 'cons' and 'cdr' and friends for modifying runtime structures.)

Which is why I complained about the existence of 'cons', 'car' and 'cdr' in the first place.

Thanks for the link, by the way. I haven't had the time to read it through yet, but a much more interesting question to me is whether or not memory management can be done statically at compile time, and if yes -- to which degree and under which conditions.

Region Inference

tkatchev: Thanks for the link, by the way. I haven't had the time to read it through yet, but a much more interesting question to me is whether or not memory management can be done statically at compile time, and if yes -- to which degree and under which conditions.

See the MLKit as a good launching-off point, then Google "region inference." The executive summary seems to be that region inference can work in all cases in the simply-typed lambda calculus with let polymorphism, and that implementation in the full Standard ML is a conservative extension. The downside seems to be that, using region inference alone, memory consumption can go to 5x what is actually required by the algorithm(s) in question, so for memory-constrained environments, a combination of region inference and real-time GC seems to be called for.

Yeah, I know. :)

Actually, the real question is how space complexity for static memory allocation compares to a runtime GC.

I hope they are big-O comparable, but a paper linked to on the MLKit site states otherwise:


As mentioned in the preface, the present version of the ML Kit supports reference-tracing garbage collection in combination with region memory management [Hal99]. While most deallocations can be efficiently performed by region deallocation, there are some uses of memory for which it is difficult to predict when memory can be deallocated.

no.


In the original message I complained about the fact that Lisp uses the same data structure for parsing and for runtime representation.

No, Lisp does not. Stop trolling.

Who's the troll?

You confuse implementation with the abstract computational model that is implied by any programming language.

Links?

For example, there are Prolog implementations that do not require a garbage collector.

Interesting, can you indicate a link or a reference?

Linear Lisp

Lively Linear Lisp -- 'Look Ma, No Garbage!'...
Linear logic has been proposed as one solution to the problem of garbage collection and providing efficient "update-in-place" capabilities within a more functional language. Linear logic conserves accessibility, and hence provides a mechanical metaphor which is more appropriate for a distributed-memory parallel processor in which copying is explicit. However, linear logic's lack of sharing may introduce significant inefficiencies of its own.

We show an efficient implementation of linear logic called Linear Lisp that runs within a constant factor of non-linear logic. This Linear Lisp allows RPLACX operations, and manages storage as safely as a non-linear Lisp, but does not need a garbage collector. Since it offers assignments but no sharing, it occupies a twilight zone between functional languages and imperative languages. Our Linear Lisp Machine offers many of the same capabilities as combinator/graph reduction machines, but without their copying and garbage collection problems.

Very impressive, thank you.

Exactly what I was looking for.
(Though I still don't see a way of implementing a satisfactory n-ary tree with this model.)

N-ary tree

Why not simply use arrays or structures for nodes? Then you select a child node with AREF. Very simple, and O(1) access with respect to N.
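Concretely, a sketch in Common Lisp (the names here are made up for illustration):

```lisp
;; An n-ary tree node holding its children in an adjustable
;; vector; looking up the nth child is O(1) via AREF.
(defstruct node
  (value nil)
  (children (make-array 0 :adjustable t :fill-pointer 0)))

(defun add-child (parent child)
  (vector-push-extend child (node-children parent)))

(defun nth-child (parent n)
  (aref (node-children parent) n))
```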

Why not?

Because Lisp does not include O(1)-access structures in its computational model.

And no, hacking an opaque "array" object into a Lisp compiler does not count. If you do that, you're simply putting a Lispish syntax on top of C, and this isn't a very interesting exercise.

For example, this does not give you any advantages C++ does not already give you, except perhaps some very useful syntactic shortcuts.

Well I would say that a cons-

Well I would say that a cons-pair has O(1) access.

And by the way: what Lisp are you talking about?

Hm.

O(1) in relation to what? :)

Big-O notation is a way of comparing functions, and a single cons-pair doesn't have any free variable you can measure.

In relation to which element

In relation to which element you want, i.e. car or cdr. Of course you have to view the cons as a function of the functions car and cdr, which then becomes the free variable.

still: What lisp are you talking about?

whats wrong with make-array

what's wrong with make-array and aref?

(in scheme:
make-vector and vector-ref/vector-set!
.)
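In standard Scheme those look like this (a minimal sketch):

```scheme
;; Constant-time indexed access with standard Scheme vectors:
(define v (make-vector 3 0))  ; a 3-element vector of zeros
(vector-set! v 1 42)
(vector-ref v 1)              ; => 42
```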

if your reference to the "computational model" of lisp (which lisp, btw?) is meant to mean the lambda calculus, then even numerical datatypes and cons-cells are "opaque objects" in lisp.

(as far as i know this is even more true for haskell, where 1 is a function that yields a value representing 1...).

if your criticism is targeted at s-exprs then #(1 2 3) should be a sufficient answer ;)

ps: regarding lambda-calculus: when was the last time you have seen anybody programming a turing machine, which is the computational model of most imperative languages? (this is just to show that this comparison is not buying you anything ;) )

(defun kons (kar kdr) (l


(defun kons (kar kdr)
  (lambda (selector)
    (funcall selector kar kdr)))

(defun kar (kons)
  (funcall kons 
           (lambda (kar kdr) kar)))

(defun kdr (kons)
  (funcall kons
           (lambda (kar kdr) kdr)))

just to show you that cons-cells are not the heart of the "computational model" of lisp, lambdas are :)
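Usage looks just like ordinary conses (a quick check at the REPL):

```lisp
;; The closure-based cells behave like CONS/CAR/CDR:
(kar (kons 1 2))                  ; => 1
(kdr (kons 1 2))                  ; => 2
(kar (kdr (kons 1 (kons 2 nil)))) ; => 2
```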

Point taken.

Thanks, interesting.

Thus, you're using closures as cons-cells. Fascinating idea, but I'm still not sure whether this ultimately affects anything. (Could you build more complicated data structures on top of closures? I'm guessing that yes, you could.)

How is TM imperative?

a turing machine, which is the computational model of most imperative languages

Last time I checked, a TM calculated a function from a (finite) bitstring to a bitstring. What is so imperative in that? There are some attempts to stretch the definition to cover a function from an (infinite) bitstream to a bitstream, but I still fail to see how it is imperative.

turing machine stack machine

(disclaimer: i have no deep knowledge about turing machines, in particular not about the mathematical theory underneath them, so this is just how i understand things)

a turing machine:
* an infinite sequential bitstring
* a current position in the bitstring
* a state machine
* a current state in the state machine

the state machine has the following instructions, which can be run during state changes: write position, move position. the state changes can be influenced by the value of the current position.

the result is calculated by executing state changes (== imperative, one after another) until i reach a final state.

Turing machine and imperative languages.

Not true. Most imperative languages model some sort of register machine, and register machines have (conceivably) different complexity characteristics from Turing machines. Moreover, even different flavors of register machines have different complexity characteristics, so this is a difficult question that you cannot just dismiss so bluntly.

Back to Lisp, though: I have a hard time buying the notion that Lisp models the lambda calculus; at the very least, I don't see how cons-cells and the lambda calculus relate to each other.

In short, my criticism of Lisp boils down to the opinion that any computational model based on cons-cells is broken in many ways. Get rid of cons-cells and I'd take my words back; but this brings up the question of how you can have any sort of Lisp without cons-cells.

No.

Saying "O(1)" is a shortcut for "f(n) <= C as n -> inf, C = const".

Saying that "car/cdr" is O(1) is meaningless since there's no f(n) you could conceivably measure here.

O(1)

O(1) is indeed a common shortcut for saying "in constant time". However, the issue tkatchev seems to want to discuss is not access to cons cells but rather access to lists. And it is true that for a list of length n, accessing an arbitrary element is *not* O(1).

BUT: As was clearly explained here time and again, Scheme has vectors, and Lisp has its version of constant-time vectors, so the fact that lists behave differently is beside the point.

Stop beating a dead horse, it's really boring.

It's not a dead horse.

Your post amounts to the following: Lisp and Scheme are front-ends to C with annoying syntax.

Fine, if you find that sort of thing exciting, I can understand that. Me, personally, I don't want nor need the existence of a front-end to C with s-expr syntax, I find coding straight C comfortable enough as it is.

The crux: whether Lisp is simply syntactic sugar for list manipulation for those who love parentheses, or whether there is really something fundamentally useful in Lisp after all.

that's it!

I really tried to keep up with the rules for LtU, but if even Ehud can't stop feeding a troll, neither can I.

"Lisp and Scheme are front-ends to C with annoying syntax."

And you seem to be a front-end to a boring, tireless Turing-machine with really annoying syntax and semantics.

You whole posts against Lisp come down to this: you don't like the syntax. That's it. And here we are wasting our time debating why you should like Lisp's s-exprs...

"I find coding straight C comfortable enough as it is."

So, there you are. You dislike it so much you feel it's fine to give up all its power, flexibility and high level programming in favor of a small, ancient, severely limited imperative language just in order to have O(1) access with a simple syntax like a[number].

yep, that sounds reasonable and very edifying...

"whether there is really something fundamentally useful in Lisp after all."

I take it for granted that just its lifespan and influence over the industry are proof of that. Of course, the same can be said of C. Still, as Paul Graham noticed, languages have progressively adopted the Lisp model. And as Guy Steele noticed, Java is halfway there from C++...

rant off. I will ignore your posts about your dislike for Lisp's syntax from now on. Have a nice day.

No, no.

The question is framed thus: is Lisp/Scheme really more powerful or flexible than C modulo the syntactic sugar?

I'm guessing that it's actually less powerful and flexible, albeit with very convenient and easy-to-use syntax.

N.B. My point wasn't that I don't like the syntax of Lisp. (Quite the opposite, I like it very much.)

troller to no end

"wasn't that I don't like the syntax of Lisp. (Quite the opposite, I like it very much.)"

earlier:
"Lisp and Scheme are front-ends to C with annoying syntax"

yep, a genuine troller who changes the arguments according to the moment.

or perhaps just a masochist who enjoys "annoying syntax" "very much"...

it was fun for a while, mr. robot...

[Admin]

Robot or not, I stand by my suggestion to ignore him.

I think it is better for LtU if we just notice that the message is bogus, and stop responding.

[Moved]

[Moved]

Really?

I did have some hopes that the person behind this would go away, without us having to resort to such means. But it seems he doesn't have a shred of decency.

Currently, I am at the point where I will try anything (and by this I do mean anything). I do not have the time to run around after him, nor do I want to.

I suggest discussing this (a) in the etiquette thread or (b) in the "site discussions" forum. The current thread really isn't the right place.

Suggestions of the sort "Ehud should do more work" aren't welcome, and will not be accepted. Has the community indeed stopped working?

rmalafia Incorrectly Attributes Statements To Tkatchev

Tkatchev did NOT say
"Lisp and Scheme are front-ends to C with annoying syntax"
He DID say
Your post amounts to the following: Lisp and Scheme are front-ends to C with annoying syntax. Fine, if you find that sort of thing exciting, I can understand that. Me, personally, I don't want nor need the existence of a front-end to C with s-expr syntax, I find coding straight C comfortable enough as it is.

Note that what he actually said is quite different from what you CLAIM he said.

Unfortunately when people cannot follow WHO says WHAT, then misunderstanding is bound to follow.

Whatever was said...

...I can't see how this line of discourse adds to the sum total of knowledge. Indeed, I think we are all a little dimmer for having followed it through this far.

Let it go.

Get over it.

correction

"Note that what he actually said is quite different from what you CLAIM he said."

Ehud never said "Lisp and Scheme are front-ends to C with annoying syntax", so I guess the credit goes all to tkatchev... he was the one claiming something.

gosh...

Playing boyscout

You seem to be talking past each other.

Which of the following is your point then (multiple choices are ok)?

  1. cons/car/cdr cannot be expressed using only lambda abstractions (see Lambda: the Ultimate Declarative, section 3.2, or the code by slom)
  2. cons/car/cdr are so important for Lisp culture (as opposed to computational model) that they must be given the first-class status (for performance reasons? for reasoning reasons?)
  3. the computational model of Lisp language (which Lisp? :) ) is based on cons/car/cdr
  4. cons/car/cdr, either native or defined via lambda, take more than constant amount of underlying computation medium resources (e.g., processor ticks)

Heh.

Points 3 and 4, but with the following qualifications: a) you don't need the lambda calculus to express the Lisp computational model, and b) you can't talk about the complexity of car/cdr/cons, since complexity is a property of an algorithm being executed on a computational model, not of the computational model per se.

As for answering the question of "which Lisp" -- to do that, you'd need a more strict definition of what the Lisp computational model actually implies. This is a fault of the "Lisp culture"; Prolog or C programmers have no problem in that regard.

[Admin]

I don't see how this discussion is related to the original post. This thread is all over the place.

The last message (the I am replying to) is clearly a troll in my mind, and I suggest ignoring it.

...

...

Clean?

Not very interesting, I believe.

The ones I've seen blow the stack in certain special cases.

Kawa

If you are familiar with Java and you like Scheme I think you might like Kawa very much: http://www.gnu.org/software/kawa/

I have used it in a few commercial projects with excellent results.
By the way, although you can use it in an interactive REPL (as any other Scheme implementation), Kawa is actually a compiler which generates JVM bytecode, and it even allows you to optionally declare the types of variables to generate code which is practically identical to what you would get by writing in Java and using javac.
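For example, Kawa's optional type declarations look roughly like this (syntax as described in the Kawa manual; details may vary between versions):

```scheme
;; <int> names the Java primitive int type; with these
;; declarations Kawa can emit monomorphic JVM bytecode.
(define (add (x :: <int>) (y :: <int>)) :: <int>
  (+ x y))
```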

"I don't want to load a SRFI

"I don't want to load a SRFI number, I want the Scheme equivalent of saying import java.util.foo with a name that maps to functionality."

Um...I'm not sure what you mean by this. Java's import command is just syntactic sugar to keep you from having to fully qualify all identifiers. It doesn't really do anything except tell the compiler where to look for any unqualified identifiers.

I fail to see any significant difference between Java's import & loading an SRFI.

"I don't want to import an object system. I want it to be there and feel like it's built-in."

Then I wouldn't think you'd want Scheme. There are plenty of languages that force a specific object model on you.

Scheme's basic philosophy seems to be to provide little more than the primitives needed to add whatever you want to the language. I can't tell you how much time I wasted trying to shoehorn an alternate object model onto C++ or Java for specific purposes. Scheme makes it much easier.

(I actually like C++'s object model a lot, but I don't really consider it general purpose. There are some things I'd choose it for, but many things I wouldn't.)

"I understand the appeal of Scheme's minimal definition, but when I'm using it for practical purposes, I don't want the API to feel tacked on. [...] I guess what I want is a Scheme that reminds me of Common Lisp while not being as ugly as Common Lisp is. I need a full-fledged toolbox, and I need it to feel seamlessly integrated."

Well, although CL may come with a huge standard library out-of-the-box, that ugliness makes it feel tacked on to me. Many of the non-standard libraries I've used with Scheme feel much less tacked on.

Having tried both, for the moment, I'm preferring the con of not-a-full-standard-library to the con of a-really-ugly-standard-library.

So far, for the things I've been doing, putting together an implementation & specific libraries to get practical work done has been relatively painless.

"Lastly, what do people think about implementations? I keep shopping around for one that I like, and I keep coming up empty."

Good question.

When I first tried Scheme, I used guile, since it was already installed on my workstation. This time I chose scsh because I'm most interested in finding a replacement for perl & ruby.

And, I tried really hard to like Emacs but failed.

I do wonder about the scalability of CL or Scheme along the number-of-coders axis. When you have 20+ programmers working on code that was originally written by a separate group of 20+ programmers... The limitations of a language like Java might actually be a benefit in these environments. On the other hand, if CL or Scheme really makes programmers more productive...

It's the usability

I fail to see any significant difference between Java's import & loading an SRFI.

I believe most humans deal better with words than with numbers, when it comes to giving things names. Our children are not given binary strings as names, etc. :-)

Yes... I suppose it was a sma

Yes... I suppose it was a small nitpick but I agree. It's a small issue, but usability seems often to be in the details. I just wish that dealing with the outside universe in Scheme could feel as natural to me as the rest of the language, which I love.

implementations

When I first tried Scheme, I used guile, since it was already installed on my workstation. This time I chose scsh because I'm most interested in finding a replacement for perl & ruby.

I've also been searching for a really good, really fast implementation for OS X/darwin PPC. On my x86 machines I just use MIT Scheme because I never do much non-trivial development on them, but I'm currently on the market for a sexy and fast, R5RS-compliant Scheme that'll compile on OS X.

And, I tried really hard to like Emacs but failed.

Emacs is strange. I've been trying to learn how to use it for a few years now--I even used the much-vaunted SLIME mode when I was teaching myself Common Lisp--and I always find myself going back to TextWrangler or BBEdit or metapad (whatever's installed on the system I'm using)... it seems that once I have a mastery of it, it'll be the best editor on Earth, but so far it has thwarted my attempts at every turn. What is the best way to learn Emacs? Is there some trick I'm missing?

Re: learning emacs

(Not to get off topic too much, but) Emacs comes with a built-in tutorial. I used it when I first started learning Emacs and I thought it helped. It is described in many web pages, e.g. this one care of somebody at U Chicago. Basically, you press the Esc key, then x, then type in "help-with-tutorial" and the Enter key (in Emacs speak, that is all "Meta-x help-with-tutorial"), which will launch the tutorial for you.

Emacs fluency and Lisp

I've found the key to Emacs fluency (and Lisp fluency via Emacs) is relentless customization and automation. Emacs is designed to be extended and made more friendly in baby steps far more than any other editor I've used. Along these lines, you should spend quality time with:

  • "C-h v" (apropos-variable) and "C-h a" (apropos) to look for features you might not know about;
  • "C-h k" (describe-key) to find the function names for interactive commands, so you can combine frequent sequences into your own functions;
  • "M-x find-function" to see how editing commands are implemented, a wonderful repository of Elisp (and C) best practices.

Emacs fluency

We were discussing this around the office today and I think the most important thing is vocabulary. If at any given time there are a couple of new Emacs commands that you're forcing yourself to use then over time you will become fluent. The best way to discover good commands to learn is by watching over people's shoulders and the second best is probably using keywiz.el.

Here are some life-improving commands: beginning-of-line, exchange-point-and-mark, forward-sexp, transpose-chars, ediff-revision. It's also worth learning these alternatives to harder-to-reach keys: tab = C-i, return = C-m, forward = C-f, backward = C-b, down = C-n, up = C-p, backspace = C-h (requires config). This seems really sick to most people but the ones who practice it love it -- and that's the best kind of thing to learn!

If you ride the bus then the Emacs Manual from GNU Press is good too.

Bigloo

I wonder why nobody mentioned Bigloo...

rkb

re: Bigloo

Bigloo looks really cool. I noticed that the current version hasn't been updated in about a year. Is that because it is really stable? Are you (rkb) an experienced Bigloo user, and, if so, can you summarize your opinion?

re: Bigloo

Bigloo looks really cool. I noticed that the current version hasn't been updated in about a year. Is that because it is really stable?

Yes, Bigloo is stable and mature, and it's actively developed and maintained. It's been successfully used in commercial projects, such as for the implementation of the Roadsend PHP compiler.

Compiling hello-world with Bigloo

Compiling hello-world with Bigloo has produced programs that segfault immediately for me on each of the three different computers I've tried it on over the years.

Bad luck or a forgiving definition of "stable and mature"? :-)

Lucky Luke

(Subject line refers to this character)

I just tried the following on a Debian machine that didn't have bigloo installed. First, I created a file called hello-world.scm:

(module hello-world)
(display "Hello, World!  Apparently, Luke is unlucky! ;oP")
(newline)

Then, I installed bigloo with apt-get install bigloo and compiled my program with bigloo hello-world.scm, which generated a 7416-byte binary (that's on a 64-bit OS; the 32-bit version might be smaller). I ran it, and it printed the following message:

Hello, World!  Apparently, Luke is unlucky! ;oP

There you have it - computers don't lie!

Keep in mind, though, that Bigloo compiles through C, so one way problems could occur is if your C environment isn't set up correctly, or isn't compatible with Bigloo for some reason. I haven't encountered that myself, I'm just speculating as to the reason for your luck.

DrScheme and Emacs work fine together

PLT falls short because it depends too much on DrScheme, and I want to use Emacs.

I use emacs with DrScheme all the time. I typically have a top-level file, say top.scm, which I open in DrScheme. This file requires the other modules as needed. When I need to edit one of the other files, I do it in emacs, save the file, and run top.scm again. Voila!

Once in a while I'll need to look at the cross-referencing, etc, from check-syntax, in which case I'll open the needed file in a new DrScheme tab. But I always edit the file in emacs.
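
For concreteness, such a top.scm is little more than a module that pulls in the others. The file names and the main entry point below are made up for illustration; this is mzscheme-era module syntax:

```scheme
;; top.scm -- the only file ever opened in DrScheme.
(module top mzscheme
  ;; hypothetical modules, edited in emacs:
  (require "parser.scm"
           "eval.scm")
  ;; assumed entry point provided by one of the required modules:
  (main))
```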

Have you checked out Qi?

If you want to try out a Lisp-like language with a powerful type system, you should check out Qi. Qi compiles down to Lisp, so you always have the full power of Lisp available. I think you may like it.

It has lots of nice features, like:

  • A very powerful type system (Turing-complete typing, as in Haskell)
  • Pattern matching
  • Native backtracking
  • Macros
  • Automatic currying and partial application
  • No monad barrier

Qi homepage

Dr. Mark Tarver's EXCELLENT book on functional programming and Qi in general.
Honestly, this book doesn't get enough praise. And it's free too!

Google Code Qi download site
My Qi Blog

But also stick with Haskell. Start reading papers with results in Haskell. Figure out the typeclass system. Figure out functional dependencies. Figure out monads.

If you are finding Haskell "hard", that should tell you that you are learning a lot as you learn Haskell. That means you should do more of it. Check out Qi along with it; I personally find the dual-language approach best.

And if you do try it out, let me know what you think!