Ruby: Language of the Programming Übermensch?

I spent a couple of hours this evening writing my first real Ruby code for the Lexical Semantics course I am taking this fall. It's excellent. The syntax is very appealing. Tokens are strangely verbose, replacing the "{" and "}" of C descent with "def" or (insert block-starting token here) and "end". For the first 30 seconds after encountering this on page 1, I wasn't sure how I felt about it. It seemed verbose. But now I see that "end" is much easier to type than "}", whose presence in Java forces me to reach and twist for basic program vocabulary. Maybe it would be different if I weren't on Dvorak, but words as delimiters rather than punctuation is a definite win. And while the tokens are fat—in a good way—they are also few. The syntax is remarkably terse, but not at the peril of clarity, as I feel is the case with Perl.

Ruby makes me understand the power of judicious syntactic support for common tasks. String interpolation is an obvious and immediately addictive feature. Built-in regular expression literals are also a plus. And there is an elegant interplay between these syntactic features on both a functional and a visual level.

As far as syntax is concerned, Scheme has represented a year-long exile in the wilderness. The bare minimalism of S-expressions was good for me. Scheme's uniform and parsimonious syntax let me focus on concepts that are fundamental to high-level programming: recursion, higher-order procedures, and data abstraction. Scheme taught me by giving me only a handful of powerful tools and then training me to use them well. Now I think that Ruby can empower me by equipping that sharpened outlook with richer facilities for the completion of common tasks. Its assumptions are friendly to my most cherished and hard-won programming intuition, but they also cater to the harsh realities of programming in an imperative world.
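To make the interpolation and regex points concrete, here is a minimal sketch (the names are illustrative): a regex literal with a capture group feeding straight into an interpolated string.

```ruby
# Regex literals and string interpolation working together.
name = "lambda"
greeting = nil
if name =~ /\A(l\w+)\z/           # regex literal with a capture group
  greeting = "hello, #{$1}!"      # capture interpolated directly
end
greeting  # => "hello, lambda!"
```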

Paul Graham says that languages are evolving ever-closer to Lisp, and he asks, why not skip directly to it? And I think that I finally have an answer. Perhaps the ideal programming experience is purely functional, and the mainstream's gradual adoption of features pioneered by functional languages reflects this truth. But there are other truths.

Tim O'Reilly presents another point. He says that as new computing paradigms emerge, new languages seem to rise with them, suggesting that from a pragmatic standpoint, a language's "goodness" is sensitively dependent on the world in which it is used and with which it must interact. Every time I have programmed functionally for practical applications, I am keenly aware of how imperative the world outside my program really is. The operating system doesn't behave functionally, and I/O operations certainly never could. There has to be a reason why these languages are so popular, beyond the simple fact that they are easier to learn for programmers whose first language was C.

My conclusion is this. In the real world of computing, one finds explicit notions of state; one finds assignment. The computing hyperscape is not (yet, perhaps) very functional. State-oriented computational objects seem a natural complement to our false intuition of objects existing in the real world as well. Nietzsche would say that there are no objects, and, indeed, there aren't even any facts. There is only "will to power". Sounds remarkably similar to me trying to explain to C programmers that there is no data, and there isn't, in fact, any need for assignment. There is only Lambda.

I think Nietzsche is right and I think Steele and Sussman were right, but that truth does not mean that the illusion of objects is an utterly worthless one. If we actually cognized the outside world as consisting only of "will to power" rather than tables and chairs and people, we'd never get anything done. And perhaps, when we pretend that everything is a Lambda, we face similar difficulties in interfacing this remarkably beautiful, completely true notion with an ability to do anything about it.

These ideas, are, I guess, nothing new. Haskell's monad system, from the cursory understanding I have of it, is a formalization of them, a clean interface between the rough and tumble outside universe and the sublime purity of Haskellspace. But if I'm not going to use an airlock, maybe a clever, elegant, and even artistic bridge between functional wisdom and imperative truth will suffice. For now, Ruby seems to be a pretty decent attempt. Lambda may be vulgarized into a quirky codeblock, but in a language in which shell commands are syntactically supported, at least it exists.
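As a rough sketch of that closing point, assuming a Unix-like shell with `echo` available: backticks run shell commands as part of the language, and lambda survives as blocks and Proc objects.

```ruby
# Shell output captured by backticks, then transformed by a lambda.
files = `echo a.txt b.txt`.split   # backticks invoke the shell
shout = lambda { |s| s.upcase }    # lambda, "vulgarized" into a Proc
files.map(&shout)                  # => ["A.TXT", "B.TXT"]
```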


The favorite anti-functional straw man

So many arguments against functional programming are of this form: pure functional side-effect-free programming is impractical, in the author's estimation; therefore, imperative languages with a vaguely functional bent are a good idea for the practically-minded.

There may be all sorts of practical reasons to pick a more mainstream language, but this is not one of them. Haskell's monads aside, languages like Scheme, Lisp, OCaml, even SML support side-effect-oriented programming just fine, and programmers in those languages use side effects whenever they need to. I hereby pronounce this particular argument nothing more than an excuse to stick to the familiar.

Programming after the death of god

If lambda is the ultimate, what is next beyond the death of god?

FPs are not impractical, but they are equipped with a particular mindset. Haskell is just a showcase of this mindset, as Phil Wadler pointed out in his very beautiful introductory essay about monads, where he motivated monads in Haskell by mentioning the Cartesian split between res extensa and res cogitans. In Descartes's philosophy, the inner world is the world of reason and certainty, while the outer world is that of matter, chance, and deception. The pineal gland was the interface capable of relating one to the other. The Nietzschean perspective is simply the opposite: there is no such split; rather, the interior world of reason is itself an illusion. More than this, it is a perverted attempt to empower the mind over the body, the idea over the thing, the abstract over the real. A contemporary German philosopher (F. Kittler) once provoked his readers by alleging that there "is no software" at all. He went so far as to recommend programming in assembly because it is closest to the real (just a few of my colleagues would support him, but they will hardly accept a language abstracting further from the machine than C does; they want to read assembly if they need it).

Thus the real is not a residual, a shining glance of a superior world of eternal forms; rather, the latter is a secondary illusion, a simulacrum in the realm of eternal change.

Of course I don't try to answer the question of whether the tragic-heroic programmer uses Ruby ;) This labeling transgresses the self-styled body-mind in the direction of commodities, where he ends up as a farce.

Kay

The philosophy is interesting

The philosophy is interesting and is really about the nature of perception. The real is never known except in terms of our mental categories. It is always a matter of comparing a category to a thing and looking for a match. For a programmer or an engineer there is a nice theory of how all this works called “Syntactic Pattern Recognition”. I would really like to post a good link but I don’t seem to see one. A good book is “Syntactic Pattern Recognition” by Gonzalez and Thomason. The book begins with formal language theory and shows how grammars can be used to represent syntactic structures and goes on to show how these structures may be parsed and recognized.

There are obvious parallels with computer science. A program linked to an environment can be seen as a syntactic pattern. When the program works it means the environment is the one the program was designed for. (ie it recognizes its environment).

edit: Further, functional and logic programs without side effects can be seen as the pure categories (ie they are what they are, and they always work). When side effects are added there is an implied comparison of the category to the environment. When the program works it means that the environment is matched.

Since we're talking about ruby...

Can ruby model the dynamic-wind construct (essentially unwind-protect updated for continuations)? This ll1 message says not, but I'd like to hear something authoritative.
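For context, a sketch of Ruby's nearest built-in: `begin`/`ensure` covers the unwind-protect half (cleanup on normal exit and on exceptions alike), but unlike `dynamic-wind` there is no before-thunk that re-runs if a continuation jumps back into the block.

```ruby
# ensure runs the cleanup on both normal and exceptional exits.
log = []

def with_resource(log)
  log << :acquire
  yield
ensure
  log << :release   # runs even when the block raises
end

with_resource(log) { log << :use }

begin
  with_resource(log) { raise "boom" }
rescue RuntimeError
  # the exception propagated, but :release was still logged
end

log  # => [:acquire, :use, :release, :acquire, :release]
```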

Lisp is not purely functional

Paul Graham says that languages are evolving ever-closer to Lisp, and he asks, why not skip directly to it? And I think that I finally have an answer. Perhaps the ideal programming experience is purely functional, and the mainstream's gradual adoption of features pioneered by functional languages reflects this truth.

Lisp allows imperative and object-oriented programming just as naturally as functional programming; it is a multiparadigm language.

Or perhaps Lisp isn't the pinnacle

Not to flame Lisp (in all its varieties), which is a fine language, but.

It is often alleged that Lisp is the "best" (for some figure of merit) programming language out there currently. Such arguments are often based either on sheer number of features (CL) or an elegant and simple yet powerful programming model (Scheme). While I may not necessarily agree, this argument doesn't trouble me--every useful programming language has its advocates, and Lisp advocacy is a far more reasonable proposition than, say, Visual Basic advocacy.

However, I get annoyed when proponents of any language make the claim that their language represents the pinnacle of computer science, and is never to be exceeded or improved upon. Graham's comment reeks of such hubris. I'm not interested in PLT design and such in order to create something which is a poor imitation of Lisp; I'm interested in going beyond it. If Lisp is really the best we'll ever do (or Smalltalk, or Haskell for that matter), we might as well shut this site down, as we're all spinning our wheels.

Of course, Graham's comment may well be directed at industry, which tends to be more concerned with face-saving (though they call it "risk management") and repeatability than it is with advancing the state of the art. From a technological point of view, practice certainly has a long way to go to catch up with theory; though that's often the case in any mature discipline.

If Lisp

Were statically typed and supported arbitrary syntax extensions (not requiring the obligatory parens), then I would probably agree that Lisp is more or less the best language out there. I think it truly was ahead of its time (though time is catching up quickly!).

The best programing language

The best programming language is a Lisp dialect. ..is probably what many people mean when they say that Lisp is the best programming language. And I don't think that's so strange. What features of Lisp do you not want in your language? Shouldn't the best language be able to do everything?

Obviously Graham isn't thinking that the current dialects of Lisp are the best they can be. Why else would he be developing his own?

I'm not objecting to praise of Lisp

...just the claim (where it occurs) that it represents the apogee of computer science--for now and forever into the future.

While there may be an asymptotic limit that we may approach (but never equal or exceed), I've no idea where it is. I highly suspect we have a long way to go.

We've had this before.

And i don't think thats so strange. what features of lisp do you not want in your language?

For one, I don't want the fact that O(1)-access arrays are not implementable in native Lisp.

Talk about brain-dead...

What does "native" mean?

Vectors are part of standard Scheme.
http://www-swiss.ai.mit.edu/~jaffer/r5rs_8.html#SEC63

So?

The underlying implementation needs C or assembly.

There is no way to express O(1)-access arrays in Scheme without dropping down to some other language.

All languages become machine code

If vectors are part of the standard language, they are as "native" as lists.

This is simply untrue.

Would that be 486 machine code? Or maybe hyperthreaded machine code? Or maybe distributed heterogeneous cluster machine code?

There's no such thing as simply "machine code". There are, however, different computational models; and it is the job of the compiler to convert between two of these computational models.

If you need a third language to act as an intermediary for doing elementary operations, there are obvious problems in your design.

Who says a third language is necessary?

* Vectors are part of the Common Lisp standard; other Lisp dialects as well. Which makes them part of the language. You seem to have this notion that any data structure which cannot be mapped onto conses is somehow "foreign" to Lisp--Lisp has had data structures other than conses/lists for DECADES.

* Many compilers for Lisp directly translate between Lisp source and the instruction set for whatever architecture you like (whether a physical CPU, a virtual machine instruction set, etc.) without resorting to any intermediate languages.

While lots of compilers for prototype languages will target things like C, C--, LLVM, CLR, or JVM rather than targeting any particular machine instruction set--that doesn't mean that the language is defined in terms of that compiler's target.

If Lisp is really the best we

If Lisp is really the best we'll ever do (or Smalltalk, or Haskell for that matter), we might as well shut this site down, as we're all spinning our wheels.

Well, no one likes a smug lisp weenie. And you can imagine superior languages to Common Lisp or Scheme. For example, there's something very 2D about using lists in structuring code; maybe you could win something by using a more sophisticated object. (And incidentally for those curious, Lisp code isn't represented ONLY as lists; in any given s-expression, you might find many different kinds of objects.)



Further, being "a Lisp" is not mutually exclusive with being another language too. This notion is no harder than the concept of multiple inheritance. ;) So for example, manipulation of things at read/syntax/compile times can be merged with concepts from other languages.



But while I don't believe in some "best" language, I'd certainly want one informed by Common Lisp, as well as every other relevant idea. As my personal preference.

Agreed... though what is a Lisp?

Agreed. However, the definition of a "Lisp" seems to vary based on the context and the author. One extreme position has it that Common Lisp is the only language that has the right to the name (based on the claim that the ANSI standardization of Common Lisp served to unite the community behind a single flavor of the language), and that other dialects of Lisp (other than legacy applications like elisp in Emacs) are imposters. This accusation is often levelled at Scheme.

Others seem to define a Lisp as any language significantly influenced by Lisp's design. Dylan is certainly a Lisp under this definition (even though the syntax is far more Algol-like), perhaps Cecil as well. A middle definition is any language which uses sexprs for syntax and has conses as a fundamental data structure.

Designers of new languages have a duty to be familiar with a wide variety of existing ones--both good and bad. And CL should certainly be on the list.

Mooo

I love Lisp... (not the Common one, though). I didn't write my original post to claim that any existing language was the ultimate or to trash FP. At least I didn't mean to. I don't really think that Ruby is somehow the Überlanguage, although I must admit it's been the source of a lot of guilty pleasure in the last couple of days.

Even though I didn't bring Nietzsche into this angle in the original post, I think his ideas correspond with my tentative conclusion. He advocates an approach to reality and valuation that rejects many of the principles I once held as unassailable... He dethrones rationalism, calling it an "expedient falsification" based on an imagined perfect higher reality, just like religion. It is foolish, he said, to reject the world as it is after comparing it to some Truth grounded only in one's imagination. It makes me wonder if the road to a great programming language might be more subtle and complex than I had thought, hinging not just on the pure, the platonic, the rational, but also on the artistic, the spiritual, and other unquantifiable qualities. Nietzsche's world is anything but rational, and his ideal human finds a way to embrace this chaos. Beyond good and evil, which are both founded upon imagined "pure" alternate dimensions, he held creativity in highest regard: creativity that engaged the irrational vagaries of the Real. What would he have to say about Haskell? Would his Übermensch write in Perl, or Scheme? I guess his point would probably be that there is no One Language, just as there is no One Truth.

And maybe I'm on crack :-). But at least a few people found it salient enough to honor it with a response. And to Anton, I'd say that maybe I'm looking for an excuse to sacrifice purity for the sake of what I want to accomplish. I'd say I'm at least as familiar with Scheme as with any other language. Indeed I love it. But to argue that Scheme interacts with the outside universe as well as Ruby does seems a bit optimistic. I guess you could argue that it's just a question of popularity rather than a design choice that makes me feel like a second-class citizen when using Scheme for real-world tasks, but I'll need greater convincing. It seems that every language makes choices and trade-offs. Just because the impure FPs support the imperative doesn't necessarily mean that they are optimized for their surroundings, just as function pointers in C don't make it a perfectly good language for higher-order programming. But I do see your point, and I do feel like a bit of a weenie for learning an imperative language so I can use a web framework rather than mastering monads. I guess my thoughts of late are: does this guilt make sense? Is my attraction to conceptual purity well founded, or is it just an excuse to run from the real world, pissing away my will to power?

Monads

My feeling on monads is that they are a desperate tool of the FP community to retain more purity than is really necessary. They may have theoretical value, but the fact that you can't sit down and explain monads to someone in 5 minutes if they don't already have a pretty deep understanding of FP tells me that there is something wrong. There must be a better way.

There are a lot of things in

There are a lot of things in CS/Maths that can't be explained in five minutes. This doesn't render any of them impractical.

Also, by saying 'desperate', you are implying that even the people who discovered and/or used them are not happy with them. How do you know that?

I didn't understand monads in five minutes, but I think every minute I spent trying to learn was well spent. My expectations are the opposite of yours: the better the concept, the longer it takes to master (just by itself or by way of its prerequisites).

I don't know enough about mon

I don't know enough about monads, but I think I agree. I have no issue with learning something hard if it is powerful. The only issue I see as unfortunate is that difficult concepts can serve as a barrier to adoption, which means a barrier to the rich development around a language that can make it nice for its users. It pays to have some mindshare and community support. If there were a killer web framework written in Haskell, you can bet I'd be poring over monads right now. Instead they're on the back-burner. I have to balance my curiosity about languages with my desire to be creative and supported in the languages I use.

I disagree

I think truly great concepts have both depth and breadth. And by that I mean that you can be introduced to the concept through a simple canonical example, but get returns on it through subtleties of the mechanism. For instance, I happen to think that the Iterator is perhaps the most important concept in imperative programming. It's about as close to a universal concept as map/fold/filter, etc. Any novice programmer can look at a simple instance of an iterator and more or less instantly see how it works, if not fully appreciate its value. And on higher levels of abstraction, iterators can perform a bewildering variety of operations that make them quite powerful, even though the novice would not think of them all right away.
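The depth-and-breadth point is easy to see in Ruby, where iterators are the house idiom: the novice case reads at a glance, and the same `each` protocol powers user-defined collections. A minimal sketch:

```ruby
# The simple case: internal iterators yielding each element to a block.
squares = [1, 2, 3, 4].select(&:even?).map { |n| n * n }  # => [4, 16]

# The same protocol, extended to a user-defined collection.
class Countdown
  include Enumerable
  def initialize(from); @from = from; end
  def each
    @from.downto(1) { |n| yield n }
  end
end

Countdown.new(3).to_a  # => [3, 2, 1]
```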

On the other hand, there is no killer canonical example of a monad that programmers universally recognize as justifying their existence. Everyone has a different starting example, probably because they are not entirely satisfied with the other ones they have encountered.

And perhaps the problem is that *nobody* really understands what a monad is. After all, Richard Feynman reportedly said something to the effect that if you can't explain something to a five year old, you don't know what you're talking about. Granted, you aren't going to be able to explain the subtleties of QM to most five year olds no matter how many Nobel Physics prizes you have won, but there is a grain of truth to the saying, I believe.

As far as the existence of complicated mathematical objects goes, I agree that there are many such things that will never have a simple explanation (like most things involved with the Millennium Problems, for instance). However, I would argue that that is exactly what makes them *impractical*. Only mathematicians know and care about many of them. But monads are purportedly not just for theoreticians and academics. They are supposed to be used by the common programmer.

And when I called them a "desperate" solution, what I mean is that perhaps languages simply don't need to be as pure as Haskell to be efficient and elegant. That's also what I was implying by saying "There must be a better way". The fact is, when it comes to the dirty world of GUIs and databases, state abounds. There is no getting around it. A database is *all about state*. So I doubt very many GUI coders are going to give up simple imperative code to build their GUIs with monads. I just don't see it happening. The cost of abstraction is too high. There's not enough return on investment. Also, the dirty world of GUIs and databases happens to be a pretty good deal of the computing landscape.

Instead of finding a way to improve the interaction of stateful and functional code, theorists have made state itself a function. That's what I mean by "desperate". ;) I see the analogous situation being imperative theorists reducing functions to state tables. ;>>

When I use a monad to write i

When I use a monad to write imperative-style code, the result is almost /exactly/ the same as if I'd written it in a rather pleasant and capable imperative language. The almost being that I need to use return or let for pure values. The only reason monads are any harder for a user to understand than "use do, this is assignment, use return or call another computation as the last thing" is that sometimes they let you code in something rather different from yet another imperative language (List and Parser monads being fun examples).

Understanding building new monads can indeed become deep stuff, but so's anything else with that power.
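A rough illustration of the "reads like imperative code" claim, transplanted into Ruby. `Maybe` and `and_then` are hypothetical names here, not a standard API; the shape mirrors monadic bind, in that each step runs only if the previous step produced a value.

```ruby
# A Maybe-like chain: nil short-circuits the rest of the computation.
class Maybe
  attr_reader :value
  def initialize(value); @value = value; end
  def and_then
    @value.nil? ? self : yield(@value)  # skip the block once we hit nil
  end
end

result = Maybe.new(10)
  .and_then { |x| Maybe.new(x + 1) }
  .and_then { |x| Maybe.new(x * 2) }
result.value   # => 22

nothing = Maybe.new(nil).and_then { |x| Maybe.new(x + 1) }
nothing.value  # => nil
```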

Definition

I don't agree with your usage of the word 'impractical'. In my view, something is impractical if no one can put it to good use without much burden. By your definition, monads are impractical because the common programmer can't use them merrily. That is too restrictive, imho, akin to saying that sniper rifles are impractical because the common soldier can't be effective with them.

All haskell apps deal with states, and people who wrote them seem to keep on going, so I'd assume that for them it's practical to do so. I am not sure where you get the impression that They [monads] are supposed to be used by the common programmer, though (and I don't mean to sound elitist).

If you're writing in haskell,

If you're writing in haskell, you've gotta use them sooner or later. It's that or you can't write useful code because you've no IO. That said, monadic IO is pretty easy to work with once people stop scaring you.

slight correction

Feynman wasn't quite that harsh: it was a college freshman, not a five year old. The story was that he had listened to a lecture and, after leaving, ran into some students, told them that he had heard something cool, and started explaining it to them. He then realized he couldn't, and later told a colleague that the state of (I think it was) QCD wasn't where it needed to be, since it couldn't be explained to freshmen.

In most (all?) science people

In most (all?) sciences, people make theories to explain the world. The theories are simplifications, abstractions if you will; their purpose is not to be True. They are just a method of predicting reality. It's quite likely that we humans work in a similar way in our everyday life, although more subconsciously and not as strictly. Everything we see is an abstraction. These abstractions are not true or false; they are merely more or less good methods of predicting stuff.

I think you should use the same method for judging abstractions in programming. It is not a question of whether the abstraction is True, Pure, or exists in the real world. It's only a question of whether it makes it easier to think about the problem at hand. Of course, what the problem at hand actually is could look different from different perspectives, making different abstractions optimal.

Flip it, and I agree

Having recently learned OCaml I have found love, like Don Box has done with Scheme.

My intuition about this is that it's the opposite of yours. Functional programming should be the default approach, because it's the cleanest and simplest. Only when you really need to should you bring in that beastly state. In OCaml you have to do a little extra work to program with state. So whenever you use state in your code it stands out from the other code. I think this combination of functional and imperative programming is very good -- functional for the most part, imperative when it's really needed.

This approach also makes you think twice about using state, whereas in languages where the stateful model is the default you easily resort to state almost without thinking. I've seen this over and over again in the Java code I've maintained, and it quickly gets ugly. The problem seems to be that the use of state -- like those annoying dust bunnies -- tends to multiply and grow in the dark corners of your code. And every once in a while you might bring out the vacuum cleaner and refactor. Unfortunately, the vacuum cleaner sometimes causes an electrical failure and the whole house catches fire, unless you have wired it up with unit tests covering every inch of the house, of course.
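The "functional by default, state when needed" style carries over to Ruby as well; a small sketch of the contrast (the numbers are arbitrary):

```ruby
prices = [5, 12, 8]

# Functional: nothing is reassigned; the pipeline returns a new value.
total = prices.map { |p| p * 1.2 }.sum.round(2)   # => 30.0

# Imperative: the accumulating state is explicit and stands out.
acc = 0.0
prices.each { |p| acc += p * 1.2 }
acc.round(2)                                      # => 30.0
```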

Yeah, I get what you're sayin

Yeah, I get what you're saying about the vacuum cleaner. Writing Java for my research job, I live in eternal fear of having to refactor stateful code. Overuse of type declarations doesn't help to lessen the pain. Is there an inherent contradiction in the idea of an object-oriented functional language? By this I mean a language that emphasises recursion and functional algorithms but passes around little stateful collections rather than values? What about a Scheme whose closure property was not that everything is a procedure, but instead that everything is a collection of procedures with shared state? I guess that's what an object is in the non-CLOS-like OO languages.
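The "collection of procedures with shared state" idea is expressible directly with closures, no class required. A sketch:

```ruby
# Two lambdas closing over the same local variable act as one "object".
def make_counter
  count = 0
  {
    inc:  -> { count += 1 },
    read: -> { count }
  }
end

c = make_counter
c[:inc].call
c[:inc].call
c[:read].call  # => 2
```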

There's no contradiction betw

There's no contradiction between object orientation and functional programming. Alan Kay wrote:

"Doing encapsulation right is a commitment not just to abstraction of state, but to eliminate state oriented metaphors from programming." — Alan Kay, Early History of Smalltalk.

My interpretation of that statement is that it doesn't necessarily mean that you program without state, but that manipulation of state is hidden within higher level abstractions that are manipulated in a declarative manner.
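A small Ruby sketch of that interpretation: the object below mutates a cache internally, but clients only ever see a pure-looking lookup.

```ruby
# Memoized Fibonacci: state (the cache) is hidden behind a declarative API.
class Fib
  def initialize; @cache = { 0 => 0, 1 => 1 }; end
  def [](n)
    @cache[n] ||= self[n - 1] + self[n - 2]  # mutation, but invisible outside
  end
end

fib = Fib.new
fib[10]  # => 55
```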

Language is not Syntax

Most of your comments on Ruby and Lisp are about the syntax. But we all know here that while syntax does matter, it's far from being the most important point in a programming language.

A language design is divided into the following categories (please tell me if I forget some):

  • the syntax
  • the semantics
  • the type system
  • the features

While the syntax is pure "style", each having its proponents, the semantics is independent of the syntax and describes how the language is evaluated.

At this point we can say that what makes Ruby Ruby is not only its syntax, which can easily be changed using a "translator", but its semantics, which can hardly be changed without breaking a lot of things.

The type system assists the semantics: except for Bash, which has only one type, "string", most languages have a more-or-less rich, more-or-less extensible type system featuring several "basic" types. The semantics of the language also specify if and how types are checked. Basically, dynamically typed languages have a type system, sometimes a very rich one, but the whole check is delayed until runtime. OTOH, static type systems will check a lot of things at compile time and can (but do not always) eliminate a lot of checks at runtime. To be precise, some dynamically typed languages will add some checks at compile time that enable them to skip some runtime checks if satisfied.
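Ruby, the language under discussion, illustrates the delayed-check case: types exist and are enforced, but only when an operation is actually attempted at run time.

```ruby
def add(a, b); a + b; end

add(1, 2)          # => 3
begin
  add(1, "two")    # no compile-time complaint; fails when + is attempted
rescue TypeError => e
  e.class          # => TypeError
end
```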

And then come the features. Class systems are a possible feature when the language has objects in its type system. First-class functions and closures are also features. Same for "generators" or continuations; same for embedded regular expressions (which are more a syntax feature, called syntactic "sugar"), and more...

All of these things together make a programming language, and syntax is far from being the most important point in language design, since it's the easiest to change. However, it shouldn't be underestimated, since a lot of programmers will judge a language by its syntax alone before even starting to play with its semantics, type system, and features.

As a conclusion, it's funny to see that language "wars" are mostly focused on two points: syntax and type checking (not the type system itself), maybe because of the "original" split between Lisp and Fortran.

"Categories".

I'd state it differently: (In order of increasing abstraction.)
  • External representation. (Syntax.)
  • Internal representation. ("Parse tree".)
  • Typing system.
  • Computational model. ("Semantics".)

The "typing system" usually gets merged into the second or fourth layer, depending on whether you do static typing or not.

Yeah, I get it...

...but I just emphasized syntax because I had sort of grown to dismiss it after going from languages like Java and C++ to Scheme, because they seemed to use their syntax on ineffective approaches to problems. Why have special for loop syntax... just make the language support recursion. But with Ruby, I realized, wow, when the language doesn't rely on syntax to support non-intuitive assumptions, it can really assist my productivity. I have come to appreciate the aesthetics of programming languages more than I ever had. The regular, uniform nesting of S-expressions is almost unbeatably beautiful, but if it takes four or five lines of nesting to perform regular expression matches that I can bust out in one line in Ruby, then the pure approach begins to lose its appeal. So while I agree that syntax isn't everything, I think that syntax and its unity with deeper features can do a lot to determine one's experience. Syntax is the linguistic interface with the computer... feeling comfortable and supported by that interface is definitely a plus.
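The kind of one-liner meant here, for the record: match, capture, and use the result in a single expression.

```ruby
# String#[] with a regex and capture-group index extracts in one step.
line = "Error 404: not found"
code = line[/Error (\d+)/, 1]        # => "404"
message = "got #{code}" if line =~ /Error/
```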

syntax, language design and "better"

The regular, uniform nesting of S-expressions is almost unbeatably beautiful, but if it takes four or five lines of nesting to perform regular expression matches that I can bust out in one line in Ruby, then the pure approach begins to lose its appeal.

This is a syntax issue, so I guess the "purity" you mean is syntactic purity? You probably know that it's possible to define a regular expression syntax for Scheme, using macros, that would let you write code very similar to your Ruby one-liners. I'm sure there are already Scheme libraries that do exactly that. So, is the advantage of Ruby that regex syntax comes with the core language? Maybe so, but that's a function of the objectives for which the language was designed. Ruby was designed as a scripting language, an alternative to both Perl and Python, so it had to support regular expressions natively, by design. Scheme, on the other hand, was designed to be more of a general-purpose language. Which one is better? Well, it depends on your goals, doesn't it?
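For concreteness, here's the kind of one-liner under discussion (a made-up example; the log line and variable names are mine): Ruby's literal regex syntax, match captures, and string interpolation all combine on a single line.

```ruby
line = "2005-09-12 ERROR disk full"               # a made-up log line
# Literal regex, capture groups, and interpolation in one breath:
year, month = line.match(/(\d{4})-(\d{2})/).captures
message = "month #{month} of #{year}"
```

In bare standard Scheme there is no regex support at all, so the equivalent means pulling in an implementation-specific library and several lines of destructuring — which is exactly the contrast being drawn.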

Different languages occupy different points in the design space, as already noted. What will be better for you will depend on where you are in this space.

Is purity important? Experience seems to show that state should be minimized. I guess this is an indication that functional programming must be doing something right. Practical concerns may limit the amount of "purity" you are willing to maintain.

Nitpick

Actually, Bash can perform numerical operations. So some strings can be "typecast" to numeric types, if you will.
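To illustrate the point (using standard Bash arithmetic expansion; the variable name is arbitrary):

```shell
x="7"               # every Bash variable is a string
echo $(( x * 3 ))   # arithmetic expansion coerces it to a number: prints 21
```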

Ruby for a lexical semantics course?

At the risk of getting off topic, I'd be interested in hearing why a course on lexical semantics would use Ruby.
Is there any specific technical reason for this choice, or just the subjective preference of the instructor? Are there any course notes available for reference?
v.

There are multiple instructor

There are multiple instructors for the course, each with a really different approach. The first instructor is a statistical guy... he's having us process massive amounts of newspaper text that has been "dependency parsed", meaning that all the words are linked to their syntactic governors, with information about the relationship. So, for "John likes Mary", "John" and "Mary" would both be linked to "likes", in the subject and object relationships respectively. (Actually, they are linked to "like", the lemmatized form.)

Using millions of words from parse trees like this, we are to build a database of features describing each word. A word's features are all the relationships it has with other words in the entire corpus of text. Using this knowledge and some statistical techniques like mutual information and clustering, we will derive sets of similar words and then eventually break words apart into word senses by relating those words that appear in similar context. It's an approach centered around Wittgenstein's idea that a word's meaning is based on its usage, on "the company it keeps".
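As a sketch of the feature-vector idea (with made-up dependency triples and a plain cosine measure — the course's actual features, corpus, and clustering techniques are much richer than this):

```ruby
# Hypothetical dependency triples: [word, relation, governor (lemmatized)]
triples = [
  ["john", "subj", "like"], ["mary", "obj", "like"],
  ["john", "subj", "eat"],  ["bob",  "subj", "eat"],
  ["bob",  "subj", "like"],
]

# Each word's features: counts of the (relation, governor) contexts it appears in.
features = Hash.new { |h, k| h[k] = Hash.new(0) }
triples.each { |word, rel, gov| features[word][[rel, gov]] += 1 }

# Cosine similarity between two sparse count vectors.
def cosine(a, b)
  dot = a.keys.sum { |k| a[k] * b.fetch(k, 0) }
  mag = ->(v) { Math.sqrt(v.values.sum { |x| x * x }) }
  dot / (mag.(a) * mag.(b))
end

# "john" and "bob" keep the same company, so they come out highly similar;
# "john" and "mary" share no contexts at all.
cosine(features["john"], features["bob"])   # close to 1.0
cosine(features["john"], features["mary"])  # 0.0
```

The Wittgensteinian point shows up directly: similarity falls out of nothing but shared usage contexts.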

Our instructor isn't forcing us to use any particular language, but he basically told us flat out to write it in Perl. I don't think he knows too much about less popular languages, since when I mentioned Scheme he didn't recognize it until I told him it was a dialect of Lisp, which he had only vaguely encountered. So he was more recommending Perl versus C++ or Java, mainly, I think, because he thought it would make completing the assignment easier.

In my previous free choice natural language course last spring I used Common Lisp once and then Scheme for the remainder, and I think they may have been part of the reason why I excelled. But because I want to get started with the Rails framework, I decided to use Ruby and get familiar with it... especially since learning Perl is very low on my list of priorities. If not for wanting to use Rails, I'd probably be using this as the occasion to learn O'Caml instead, because I could use the efficiency in the massive statistical processing that's required.

After reading some of the responses, I think that it's perhaps not an issue with functional languages that I have, but with overly "pure" ones. Ruby just felt so much more convenient to use than Scheme did last semester, and I think a lot of that is its willingness to be a language for "right now" rather than a minimalist, beautiful language for the ages in which you are constantly forced either to reinvent the wheel or to adapt yourself to a particular implementation's reinvention of it. By comparison, Ruby is loaded (or cluttered) with features, but it organizes them well and manages to avoid the flaws that make Java and company so obnoxious.

So maybe a good question is... what's a feature-rich functional language for which those features are well documented? Things like syntactic regular expression support are the sort I am talking about, features that make common tasks simpler. Maybe O'Caml is the answer... but I think that, so far, Ruby is much more convenient for the task at hand than Scheme would have been. It's the source of a lament I posted in another thread about yearning for a practical Scheme. For things like Essentials of Programming Languages and learning to write interpreters, for pure, perhaps academic tasks, Scheme seems unbeatable because it doesn't clutter the problem. But when I need something done, it has frustrated me, and I don't think it's because I'm not familiar enough with the language. I think it's because the language is not familiar enough with the things I need to do. That's not a knock... Scheme was never meant to be that language, and if it were, it wouldn't be Scheme and that would be sad. I guess what I want is a functional scripting language with an indulgent feature set.

Format whine [OT]

I find it very hard to read this post without paragraph breaks.

Then again, if you like the long paragraph style, there's no reason to pander to lazy readers like me.

Yeah... you're right. I start

Yeah... you're right. I start off with a mind to keeping it short but run my mouth longer than I thought all too often. I edited the comment and broke it into paragraphs.

It would seem as Common Lisp

It would seem that Common Lisp is the obvious choice. It has tons of features and is as close to Scheme as any established language. Is there any particular reason why you don't like it?

Amusingly enough, one of the

Amusingly enough, one of the biggest reasons I ditched it for Scheme was seeing how much cleaner it felt to use one namespace for variables and procedures. So the very reasons I now have gripes with Scheme originally brought me to it. I was also really interested in explicit continuations, which CL doesn't support. But you might be right. I learned for the first few months on Common Lisp... maybe it's time to look at it again.

Yeah, those are really nice f

Yeah, those are really nice features of Scheme. I have thought about how difficult it would be to make Common Lisp behave the same way. I think a single namespace wouldn't be so hard: just overload defun, setf, and similar to write to both namespaces.

Continuations are more difficult, but I wonder why nobody has made a Common Lisp implementation with them.

For one because it is impossi

For one because it is impossible to say how they ought to work in the presence of unwind-protect and condition signalling.

I'm sorry but I can't see wha

I'm sorry, but I can't see what the problem is, especially since I recently implemented a system with exactly those features. Would you care to explain? I wouldn't like my system to have any hidden inconsistencies, as I suppose that's what you are talking about.

Here's an example

Take this piece of code for example:

(let ((r (acquire-resource)))
  (unwind-protect
    (progn
      (foo)
      (use-resource r))
    (release-resource r)))

Without continuations, FOO will either return, followed by (USE-RESOURCE R), or the stack will be unwound. Either way, the resource will be released.

With full continuations, as in Scheme, it's more complicated: FOO may return zero or more times. If FOO never returns, the resource is never released. On the other hand, if it returns more than once, (USE-RESOURCE R) will be called after (RELEASE-RESOURCE R) has been called.

Here's an example of a problematic FOO:

(define foo-continuation #f)
(define (foo)
  (call/cc (lambda (k)
             (set! foo-continuation k))))

The only way to prevent this is to ensure code inside UNWIND-PROTECT doesn't use CALL/CC, but with higher-order functions, this can be impossible.
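Incidentally, the same hazard can be reproduced in Ruby, whose callcc (from the standard continuation library) is a full call/cc in the Scheme sense. Here is a sketch with a logged pseudo-resource standing in for a real one:

```ruby
require 'continuation'

$saved = nil
$log   = []

# Plays the role of FOO: captures a continuation, then returns normally.
def foo
  callcc { |k| $saved = k }
end

def with_resource
  $log << :acquire                # (acquire-resource)
  begin
    foo
    $log << :use                  # (use-resource r)
  ensure
    $log << :release              # (release-resource r), the cleanup clause
  end
end

with_resource
# Re-entering the saved continuation resumes inside FOO, so :use runs a
# second time -- after :release has already happened, and with no second
# :acquire, which is exactly the problem described above.
$saved.call if $log.count(:use) < 2
```

After the jump, the log shows the resource being "used" after it was "released", and released twice for a single acquisition.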

Can't you just add a before c

Can't you just add a before clause (to unwind-protect) to be evaluated before you enter the protected area, then evaluate the after clause whenever a continuation call makes the control flow leave the protected area, and evaluate the before clause every time it jumps into the area again?

That would at least guarantee that the after clause is evaluated exactly once after each time the before clause is evaluated, and that the before clause never runs twice without the after clause running in between.

Yes

That's what dynamic-wind does. It's not hard to imagine a situation where it would be worse than unwind-protect, though. (For example, iterating over the output of a Python-style generator would needlessly invoke the before and after clauses if the generator was called outside the dynamic-wind.)

Other solutions seem to put light restrictions on call/cc.

Here's a link

Here's a link describing the problem.