LtU Forum

Journal "Logical Methods in CS"

From the Proof Theory mailing list: the journal Logical Methods in Computer Science (LMCS) has been launched, with this message from the editors:

Dear Colleague:

We are writing to inform you about the progress of the open-access,
online journal "Logical Methods in Computer Science," which has recently
benefited from a freshly designed web site, see:

                   http://www.lmcs-online.org

In the first year of its existence, the journal received 75 submissions:
21 were accepted and 22 declined (the rest are still in the editorial
process). The first issue is complete, and we anticipate that there
will be three in all by the end of the calendar year. Our eventual
aim is to publish four issues per year. We also publish Special Issues:
to date, three are in progress, devoted to selected papers from LICS
2004, CAV 2005 and LICS 2005.

The average turn-around from submission to publication has been
7 months. This comprises a thorough refereeing and revision process:
every submission is refereed in the normal way by two or more
referees, who apply high standards of quality.

We would encourage you to submit your best papers to Logical Methods in
Computer Science, and to encourage your colleagues to do so too.
There is a flier and a leaflet containing basic information about the
new journal on the homepage; we would appreciate your posting
and distributing them, or otherwise publicising the journal. We would
also appreciate any suggestions you may have on how we may improve the
journal.

Yours Sincerely,

Dana S. Scott (editor-in-chief)
Gordon D. Plotkin and Moshe Y. Vardi (managing editors)
Jiri Adamek (executive editor)

PS. I would have posted this to the front page, but these privileges have been lost, no doubt for recalcitrant inactivity...

Microsoft Command Shell (msh)

A guided tour of the Microsoft Command Shell.

(Via Mike Newhall over at ll-discuss.)

Why Lists?

I just want to ask the rather naive question: why are lists the most popular built-in aggregate data structure in FP languages? I understand the historical precedent for lists, and I understand that lists are very well understood. But really, there is nothing magical about lists, and computationally they aren't particularly cheap. It seems to me that the basic interface lists provide, and that FP consumes, is head(), rest(), append(), and splice(). There may be a few others, but my point is that there are lots of data structures that can support this interface. So why not make the interface intrinsic, rather than the data structure, and support lots of different structures intrinsically (arrays and trees come to mind)?
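To make the question concrete, here is a minimal Haskell sketch (with illustrative names, not taken from any existing language) of what making the interface intrinsic might look like: a type class that any sequence-like structure can implement.

    -- The interface, independent of any particular representation.
    class Sequence s where
      hd     :: s a -> a            -- partial: undefined on empty sequences
      rest   :: s a -> s a
      append :: s a -> a -> s a     -- add an element at the end

    -- Plain lists are one instance; note that append is O(n) here,
    -- which is part of the point that lists are not always cheap.
    newtype ListSeq a = ListSeq [a]

    instance Sequence ListSeq where
      hd     (ListSeq (x:_))  = x
      rest   (ListSeq (_:xs)) = ListSeq xs
      append (ListSeq xs) x   = ListSeq (xs ++ [x])

    -- Arrays, finger trees, etc. could implement the same class, and
    -- client code written against Sequence would not care which is used.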

incremental algorithm help

I hope someone can point me to literature regarding theories of 'incremental' calculation. Many people here will know about folds or catamorphisms and how they help with computations on lists. I am trying to understand the theory of similar calculations when items are added to lists. For example, it is one thing to calculate the sum of the integers in a list, but what about returning the sum each time a new integer is added? Or, to make it a little more tricky, how does one efficiently (and generically) calculate the average of the integers in a list when integers are constantly being added?
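For the average example, a minimal sketch in Haskell (the names are mine, not from any paper): keep the fold's running state around and update it in constant time per new element, instead of refolding the whole list.

    -- Running state for an incremental average: the sum so far and
    -- the number of elements seen so far.
    data Avg = Avg { total :: Int, count :: Int }

    -- O(1) update when a new integer arrives.
    step :: Avg -> Int -> Avg
    step (Avg t c) x = Avg (t + x) (c + 1)

    -- Read off the current average at any time.
    average :: Avg -> Double
    average (Avg t c) = fromIntegral t / fromIntegral c

    -- Processing a whole list is just a fold over the same step:
    -- average (foldl step (Avg 0 0) [1,2,3])  ==>  2.0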

Actually, I found the paper Incremental Algorithms on Lists by Johan Jeuring, which has a chapter on incremental algorithms, but I can't understand a word of it. I didn't understand Functional Programming with Bananas, Lenses, Envelopes, and Barbed Wire until I read some introductory material on foldr (and I still don't understand it completely). I'm hoping there are more basic references that will help me understand incremental algorithms (or would it be right to call them incremental folds?). Thanks!

Terminology proposal

From a comment:
(Likewise it is ridiculous to say Lisp has no syntax--it certainly does; just a rather simple one).

I think even this is too much. Lisp's true syntax is anything but simple, and in fact it varies from Lisp to Lisp. I propose that we use the phrase latent syntax to describe the Lisp approach to syntax, by analogy with latent types.

The idea, of course, is that Lisp has a very trivial syntactic structure at one level, but there's an entire world of syntactic constraints that are latent in the surface syntax, but that are clearly syntax rather than semantics and which the programmer (and many analysis tools) must be aware of and manage. Note all the things in R5RS that are labeled as "syntax" or "derived syntax"...
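To illustrate the distinction with a sketch (in Haskell, with made-up names): the surface syntax is just s-expressions, and the latent syntax is a further set of constraints on those trees, such as what counts as a well-formed let form.

    -- The surface syntax: every Lisp program is just one of these.
    data SExpr = Atom String | List [SExpr]

    -- A piece of the latent syntax: a valid let form needs a list of
    -- (name init) bindings followed by a non-empty body.
    isValidLet :: SExpr -> Bool
    isValidLet (List (Atom "let" : List bindings : _body@(_:_))) =
        all isBinding bindings
      where
        isBinding (List [Atom _, _init]) = True
        isBinding _                      = False
    isValidLet _ = False

    -- (let ((x 1)) x) parses as an s-expression and satisfies this check;
    -- (let x 1) also parses as an s-expression, but violates it.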

Does this distinction make sense? I really like the phrase latent syntax to describe this, but maybe there's something better? There's abstract syntax, of course, but I'm not sure that's really the same thing. For instance, C and Java have concrete and abstract syntax, but neither of them really has latent syntax. I'm not sure what the opposite would be... manifest syntax, maybe?

Obviously this applies to XML as well. Is there a standard terminology in that community? "Schema" isn't particularly useful...

Bruce Tate: Technologies that may challenge Java

An article by Bruce Tate about a few 'technologies' that may challenge Java. The first one is 'Dynamic Languages.' He uses one example where two variables are operated upon and assigned in a single line, x1, x2 = x2, x1+x2, in contrast to the Java approach, which takes more lines. He also says, "With Ruby, types are dynamic, so you don't have to declare them." If I am not mistaken, both can be done (and are done) in statically typed languages: 'single line, multiple assignments' can be done using tuples, and type inference allows one to avoid declaring types.
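For what it's worth, a minimal Haskell sketch of the same swap-and-step (illustrative names): the tuple gives the one-line multiple assignment, and the types of x1 and x2 are inferred rather than declared.

    -- One step of the Fibonacci recurrence as a single tuple assignment;
    -- no type declarations needed, everything is inferred.
    fibStep (x1, x2) = (x2, x1 + x2)

    -- iterate fibStep (0, 1)  ==>  [(0,1),(1,1),(1,2),(2,3),(3,5),...]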

He also mentions continuation servers, metaprogramming, and convention over configuration. As an aside, while describing continuations, he writes: "Continuations are language constructs that let you quickly store the state of a thread, and execute that thread later." As far as I know, continuations simply allow an explicit return address of a function/procedure to be provided (rather than the default).
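That reading matches, for example, callCC in Haskell's Cont monad (from the mtl library); a small sketch, where exit is the captured return address:

    import Control.Monad (when)
    import Control.Monad.Cont (Cont, runCont, callCC)

    -- callCC hands the computation its own continuation; invoking
    -- exit jumps straight back to that return address.
    safeDiv :: Int -> Int -> Cont r (Maybe Int)
    safeDiv x y = callCC $ \exit -> do
      when (y == 0) (exit Nothing)       -- early return
      return (Just (x `div` y))

    -- runCont (safeDiv 10 2) id  ==>  Just 5
    -- runCont (safeDiv 10 0) id  ==>  Nothing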

Selfish reference begging

Hi,
I am looking everywhere for the proofs of the correctness and completeness of the Hindley-Milner type inference algorithm.

I could only find the short paper Principal Type-Schemes for Functional Programs by Damas and Milner, which gives only sketches of the proofs and informs the reader that the detailed proofs are in Damas's Ph.D. thesis, Type Assignment in Programming Languages, which I cannot find anywhere.

I would be grateful if anyone could point me to these proofs.

R.K.

ABLE

I came across this link while searching for programming languages related to accounting: http://en.wikipedia.org/wiki/ABLE_programming_language.

Unfortunately the link is a bit of a dead-end. None of the Wikipedia links lead anywhere useful.

Has anyone ever heard of ABLE before? Or, better yet, programmed in it?

Does anyone have links to sites that might reference this language? Google seems to know nothing about ABLE.

Back to my original search, are there any languages you are aware of that are focused on accounting?

Best default sequence?

Most languages have a default concrete data type for collections (arrays in C and Perl, lists in Lisp and Haskell, etc.) that gets used more often than it probably should. By 'default,' I mean that languages generally have better syntax for one particular data type, which then gets used most often (or better pattern matching, or better names for constructors (think ":" and "[]", or "cadr"), or historical reasons, or predominance in tutorial material, etc.). But for new languages, what criteria should we use when choosing an appropriate default? For a new functional language, I'm thinking about using catenable deques (with O(1) append, cons, and snoc) or democratic sequences (or some other balanced tree with O(lg n) complexity for almost all operations). Is there something out there that might make for an even better default? Or is the current batch of defaults already close enough to optimal?

(And please feel free to opine about different collection defaults in imperative and logic languages as well).
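For comparison, a sketch using Haskell's Data.Sequence (a finger tree), which is close to the kind of default being asked about: O(1) amortized cons and snoc, with logarithmic append and indexing.

    import Data.Sequence (Seq, (<|), (|>), (><))
    import qualified Data.Sequence as Seq

    -- cons (<|) and snoc (|>) are O(1) amortized;
    -- append (><) is O(log(min(n,m))).
    example :: Seq Int
    example = (1 <| Seq.fromList [2, 3]) |> 4        -- fromList [1,2,3,4]

    appended :: Seq Int
    appended = example >< Seq.fromList [5, 6]        -- fromList [1,2,3,4,5,6]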

Abstraction

Just noticed that Joel of Joel on Software wrote (quoting someone else, I think) at the end of his latest blog entry: 'Google uses Bayesian filtering the way Microsoft uses the IF statement.' He follows this with an example of his own citing the difference in approaches to spell-checking.
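A toy sketch of that contrast in Haskell (all data here is made up): the IF-statement approach hardcodes specific corrections, while the Bayesian approach scores candidate corrections by how probable they are, e.g. by word frequency in a corpus.

    import Data.List (maximumBy)
    import Data.Map (Map)
    import qualified Data.Map as Map
    import Data.Ord (comparing)

    -- Rule-based: one branch per known typo.
    ruleBased :: String -> String
    ruleBased "teh"     = "the"
    ruleBased "recieve" = "receive"
    ruleBased w         = w

    -- Probabilistic: pick the candidate with the highest corpus count,
    -- a crude stand-in for the most probable intended word.
    bayesian :: Map String Int -> [String] -> String
    bayesian freqs = maximumBy (comparing (\c -> Map.findWithDefault 0 c freqs))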
Interesting point, and one I hope other readers will enjoy.
Stephen
