LtU Forum

The long tail of programming languages

Charles Simonyi's Intentional Software blog has an interesting piece by Magnus Christerson, on what he calls the long tail of programming languages. It seems that (not surprisingly) programming languages exhibit the same power-law popularity distribution that has been observed in so many other areas. Christerson's entry devolves into more of a marketing spiel for Intentional's products by the end, but he makes some interesting observations on niche and domain-specific languages along the way. If nothing else, the plot of programming language popularity ranking is interesting (although I find it a little hard to believe that more people program in PostScript than in OCaml).

From Perl Monks: "pissed off about functional programming"

On Perl Monks the most highly rated node (discussion topic) for the month is pissed off about functional programming in which the node's author "vents" about functional programming, focusing on four functional-programming "myths." I found it somewhat confus(ed|ing), but it's worth a read, if only to see an unusual perspective on FP's pros and cons.

R6RS Scheme Status Report

The R6RS committee has released a new status report. They've ratified a number of small additions (square brackets, multiline comments) and made progress on several fronts (Unicode support, exception handling). Unsurprisingly, the module system is still the trickiest issue.

(LtU discussion of the previous status report)

OO runtime graphs are scale-free

If you liked Barabási's book, you'll find this interesting. The May issue of CACM has an article called Scale-Free Geometry in OO Programs [PDF, 180K] by Alex Potanin, James Noble, Marcus Frean, and Robert Biddle. They examined 60 heap snapshots from 35 different Java programs and found all of the resulting object graphs to be scale-free. This is true for both incoming and outgoing links with the qualification that objects rich in incoming links are poor in outgoing links and vice versa.

What are the implications of this for PL research in OO languages? The authors point out the following:

  • One useful aspect of scale-free networks is their high resilience to random network damage. Since most nodes are poorly connected, deleting them doesn't affect the remaining nodes all that much.
  • A small number of nodes are highly connected. You may get the best bang for the buck by concentrating early QA effort on the highly connected nodes first.
  • "It is well-known that garbage collectors can improve their performance by assuming that most objects have only one or two outgoing references. The scale-free nature of object graphs explains why making this assumption is worthwhile."
  • Any other interesting implications?
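The heavy-tailed degree distributions the authors observed can be illustrated with a toy preferential-attachment model, the standard generative mechanism behind scale-free networks (a sketch in Python; this is an illustration, not the paper's methodology):

```python
import random
from statistics import median

def preferential_attachment(n, m, seed=0):
    """Grow a graph where each new node attaches to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degree = {0: 0, 1: 0}
    # 'targets' holds one entry per edge endpoint, so picking uniformly
    # from it samples existing nodes proportionally to their degree.
    targets = [0, 1]
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(targets))
        degree[new] = 0
        for t in chosen:
            degree[new] += 1
            degree[t] += 1
            targets.extend([new, t])
    return degree

deg = preferential_attachment(2000, 2)
degs = sorted(deg.values())
# Heavy tail: a few hubs have degrees far above the typical node,
# mirroring the "rich in incoming links" objects in the heap snapshots.
print(median(degs), max(degs))
```

Most nodes end up with only a handful of links while a few hubs accumulate many, which is exactly the shape that makes "most objects have few outgoing references" a profitable GC assumption.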

    Two papers on combinators

    Hello! I have a couple of papers on using combinators in Combinatory Logic (CL).

    I've just updated a paper I wrote some time ago that examines the derivation of a combinator that consumes its argument. "Meditations on the Void" is available at http://www.cotilliongroup.com/code/void-meditations.html.

    I have developed a new combinator, P (for Propositional combinator), that curries two arguments at linear cost (currying arguments using combinators usually has an exponential size cost); it is also useful for expressing propositions in CL. "Penguin" is available at http://www.cotilliongroup.com/code/penguin.html.
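As background for readers unfamiliar with CL, here is a minimal sketch of combinator application in Python. This shows only the standard S and K basis, not the P combinator defined in the paper:

```python
# Standard combinators from combinatory logic, as curried Python functions.
S = lambda f: lambda g: lambda x: f(x)(g(x))  # S f g x = f x (g x)
K = lambda x: lambda y: x                     # K x y = x

# The identity combinator I is derivable: I = S K K,
# since S K K x reduces to K x (K x) = x.
I = S(K)(K)

print(I(42))  # → 42
```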

    More generally, my research is in programming languages using the predicate calculus and the lambda calculus. The main page for my explorations is at http://www.cotilliongroup.com/code/research.htm.

    Sincerely,
    Doug Auclair

    expressivity of lisp/scheme but speed of assembly/C/C++

    I have recently found Scheme to be a very expressive language that allows me not only to quickly implement my project, but also to extend it easily. I also need my application to be as fast as possible (it's basically a database application, as mentioned in an earlier thread). From what I gather, Scheme/Lisp implementations are far slower than C or C++ (even OCaml seems very fast by comparison). I've seen benchmarks where even Scheme-to-C ends up being several times slower than C++.

    Now my question is this: is it at all possible to bring the performance of Scheme closer to C++?

    I would think that Lisp/Scheme would actually be easy to optimize. Could they be optimized further if Haskell-like typing were introduced (I think Qi does that)? The more information available to the compiler, the better it can work!

    If there are some things intrinsic to Lisp/Scheme (perhaps eval?) that keep performance low, then I'll have to live with some performance upper limit. Otherwise, why don't we see faster implementations? Is it because those who use Lisp/Scheme just don't have a need for performance?

    I don't even mind if current performance limits exist simply because no one has written an aggressive enough compiler. I can prototype my application...then start work on a compiler.

    Implementing a Lisp Editor - Would like some advice.

    I wasn't too sure if this was the best place to come with a question like this, but I don't have many (ok any) contacts in the Lisp (CL or Scheme) world that are as well informed and opinionated as the readers of this site.

    I have read many times that the programming environments of the Symbolics machines were some of the best environments to work in when programming in Lisp. To my surprise, I haven't seen any modern environments that have attempted (or are currently attempting) to reach this level of productivity; or if they are attempting it, some magic is missing from them. But more importantly, I haven't seen any commercial company attempting the development of such an editor.

    Now mind you, my experience with these environments is less than 'nil, considering the fact that the only reference I have to go off of are the Rainer videos. I feel like a kid trying to experience the true magic of the Wild West by only watching John Wayne movies. I can deal with the lack of experience, but a commercial project cannot be done without the voice of experience and/or customers. This is where you, the readers of this site, can possibly help me, if willing.

    I have two problems; one is the runtime. My first instinct jumps to mzscheme, because of its rather large install base across universities. I truly enjoy programming in Scheme, and see many competitors in the CL market that I am not too sure I could contend with. So my first question, from a strictly pragmatic viewpoint, is which environment is better for the user, or rather which language is better suited for what job. I know people have their preferences, but I would like to know why they lean one way or the other. Or, if both are used, which language is used for which types of jobs.

    The second question concerns the graphical front end (more appropriately the 'product' piggy-backing on the language). I was enchanted with the graphical environment of the Symbolics machines, and just recently discovered that this set of widgets was a standard called CLIM. I also recently found out that McCLIM is a nice project that is working to create a LLGPL version of the standard.

    As you can imagine, finding this only put more decisions into my head. The choice was easy when I was only dealing with MrEd. I am looking for recommendations as to which road to travel, or rather for someone to present options I may not be seeing. On one side, I feel that I could use MrEd and develop the entire environment strictly in Scheme using its libraries. I am sure that I could use the FFI to call CL environments if the customers wished us to offer a CL version of the editor. My concern (due to lack of experience with MrEd) is that I won't be able to get the same 'Symbolics magic' from MrEd that may be possible with CLIM.
    On the other hand, it is also possible to develop the editor using CLIM and CL, and use the CL's FFI implementation to call mzscheme to evaluate Scheme code. Both are viable solutions, but I see a bigger problem.

    The Lisp family of languages grants much power to its user at the metalevel. This was the main reason for my wanting such an environment; no IDE developer has been able to guess every need I have (hell, I don't even know until I know), so I wanted a programmable environment like those of the past (the ones everyone seems to rave about). The bigger problem I mentioned above, when using the combined-language approach, is the potential loss of programmability. If the editor is in CL and I am only offering a Scheme product, then it's possible that the editor wouldn't be programmable (without a CL environment thrown in as well), which defeats the purpose of working on such an editor.

    I do apologize for the length of this posting and if the content is not suited towards this site. Thank you for taking the time to read it, and I appreciate any constructive comments or advice you may have.

    Best Regards,

    MJ Stahl

    Restructuring Partitioned Normal Form Relations Without Information Loss

    By itself, this paper is not related to PLT. However, it is interesting to observe how the idea of "types up to isomorphism" manifests itself in different communities; in this paper it is called data equivalence of (nested) relational schemes.

    I wonder how a reformulation of the paper in CT terms would look, and whether it would be data-equivalent to the original :-)

    Restructuring Partitioned Normal Form Relations Without Information Loss

    PS: I hope this is not too much off-topic.

    Virtual Machine and Runtime Framework

    Hello,

    I'm currently designing an intermediate programming language, with its virtual machine and compiler. The goal is to provide language designers with a common runtime framework that is easy to target, well optimized, and comes with a good set of libraries.

    The language is designed to support a wide range of semantics and type systems (static and dynamic) and offers imperative, OO, and functional features together. It might be quite easy to generate code from your favorite language to this intermediate language. This way, you can use either a safe, small, and embeddable VM or a full-speed JIT to run the code.

    For reusing the libraries, you might have to write some wrappers for your language's semantics, but that's still less work than doing C/C++ FFI.

    I'm planning to soon release the full specifications of the language with an OCaml toy interpreter, and will welcome any comments. The VM and JIT will follow later.

    But before that, what do you think about this project? Would you feel like using this runtime system? And what kind of features are you expecting to be in the specs?

    Sapir-Whorf again?

    While reading the Stanford Encyclopedia of Philosophy's article on Modal Logic, I stumbled upon a phrase:

    The answer is that there is a dangerous ambiguity in the English interpretation of [...a formula follows...]


    In English, ‘necessarily’ is an adverb, and since adverbs are usually placed near verbs, we have no natural way to indicate whether the modal operator applies to the whole conditional, or to its consequent.
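    The two readings the article distinguishes can be written unambiguously in modal notation (this is the standard use of □ for "necessarily", not the elided formula itself):

```latex
% "Necessarily, if p then q": the modality scopes over the whole conditional.
\Box (p \rightarrow q)

% "If p, then necessarily q": the modality scopes over the consequent only.
p \rightarrow \Box q
```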

    Native English speakers, do you agree?
