The History of T
started 12/12/2001; 9:24:55 AM - last post 12/19/2001; 11:22:29 PM
Bryn Keller - The History of T
12/12/2001; 9:24:55 AM (reads: 7024, responses: 81)
The History of T
A fascinating recollection of the development of T (an early Scheme implementation) from someone (Olin Shivers, author of scsh) who was there.
Posted to history by Bryn Keller on 12/12/01; 9:26:43 AM
Ehud Lamm - Re: The History of T
12/12/2001; 12:08:05 PM (reads: 5888, responses: 0)
This is a real treat. Behind the scenes of some amazingly cool work.
Frank Atanassow - Re: The History of T
12/12/2001; 2:04:23 PM (reads: 5879, responses: 0)
Fascinating. In fact, that archive (ll1-discuss) has many other interesting posts (though, pointy-headed theorist that I am, I find I cannot agree with many of them :).
Bryn Keller - Re: The History of T
12/12/2001; 3:26:12 PM (reads: 5856, responses: 0)
Okay, Frank, I'll bite. :-) What are the top five things you disagree with in that archive and why? Let's get some theoretical perspective!
Ehud Lamm - Re: The History of T
12/13/2001; 2:38:53 AM (reads: 5858, responses: 0)
Yep. This is a very pro-theory site.
Frank Atanassow - Re: The History of T
12/13/2001; 5:44:41 PM (reads: 5861, responses: 0)
Of course I haven't read the whole archive, so I can't give a global "top
five", but, since you've taken me to task, here are four quotes which elicited
strong reactions from me:
(BTW, I'm not sure how much my objections have to do with my being a
theoretician; I added that bit because Olin Shivers took a few friendly jabs at
us in his message.)
from http://www.ai.mit.edu/%7Egregs/ll1-discuss-archive-html/msg00361.html
> In this case, having a few useful builtin/standard datatypes or
> collections helps me write a small program quickly.
The implicit claim here is that having many built-in datatypes is good. I think it is bad. When I see a language like this, I think, "This language addresses the symptoms, not the problem."
The need to build in datatypes reveals a deficiency in the expressiveness of a
programming language. More expressive languages allow user-defined datatypes
to behave more like built-in (first-class?) types.
You can recognize this very clearly if you are familiar with a few typed
functional calculi. For example, in early versions of (LCF) ML, there was no
`datatype' declaration; you could only define datatypes via recursive type
equations, and so the coproduct and product type operators had to be built
in. In Standard ML, coproducts can be defined using `datatype', i.e., they got
subsumed by a more general construction, but product is still primitive. In a
language like Charity, there is a declaration form for `codatatypes' which lets
you define not only products, but also the function space type. In polymorphic
lambda-calculus, you can do away with recursive type definitions altogether,
since they are definable using impredicative polymorphism. You still need the
function space operator, though; in pure type systems, even that disappears,
and all you have is the dependent product binder.
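To make the last step concrete, here is a minimal Haskell sketch (an illustration only, using the well-known Church encodings) of how products and coproducts stop being primitive once you have rank-n/impredicative polymorphism:

{-# LANGUAGE RankNTypes #-}

-- Church-encoded product: a pair is whatever you can do with both components.
type Pair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> Pair a b
pair x y = \k -> k x y

first :: Pair a b -> a
first p = p (\x _ -> x)

second :: Pair a b -> b
second p = p (\_ y -> y)

-- Church-encoded coproduct: a sum is whatever you can do with either case.
type Sum a b = forall r. (a -> r) -> (b -> r) -> r

inl :: a -> Sum a b
inl x = \l _ -> l x

inr :: b -> Sum a b
inr y = \_ r -> r y

-- Case analysis is just function application.
caseSum :: Sum a b -> (a -> r) -> (b -> r) -> r
caseSum s = s

main :: IO ()
main = do
  print (first (pair (1 :: Int) "two"))        -- prints 1
  putStrLn (caseSum (inl (3 :: Int)) show id)  -- prints 3

Nothing here is built in except the function space; products and coproducts are ordinary definitions.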
Now, to get really abstract: This pattern of things at the object
level being subsumed by constructions at the meta-level is more than a
coincidence. It is an instance of a free construction across semantic
dimensions which can actually be formalized and, indeed, it is one of the
things which forms the foundations for the programming language I am
designing.
I admit that this idea is a little bit different from what the poster had
in mind, since he was talking about a dynamically typed language. But, see my
rant on languages vs. libraries below.
from http://www.ai.mit.edu/%7Egregs/ll1-discuss-archive-html/msg00430.html
Shriram Krishnamurthi gave a great rebuttal to this too but, darnit, I have
axes to grind! ;)
> Or , to put it as simply and briefly as possible, in answer to "Why
> is Python considered lightweight and Lisp is not" , there are three
> reasons.
>
> 1. Syntax.
>
> 2. Syntax.
>
> and
>
> 3. Syntax.
>
> Syntax matters. It matters a lot. Academic language designers seem
> to be most concerned with semantics, completeness, doing the right
> thing, etc; but not as much concerned with syntax.
Boy, if I had a nickel for every time... grumble, moan. Is syntax the last
refuge of linguistic luddites?
This poster claims syntax matters a lot. It sure seems to matter a lot to the
poster. I suspect the reason for that is that the only major differences
between the languages he has used are syntactic differences.
Needless to say, I don't think syntax matters much. As with anything, if you
take it to extremes you will succeed in making it a burden (as the designers of
XSL must have intended!), but for most people this issue was settled long ago,
once they figured out how to read and write BNF grammars. Once you can do that,
you know that you can write many parser frontends for the same language, each
using any number of different syntaxes.
One might even go so far as to say that syntax is not part of a `language' at all: if you change the syntax but not the semantics, there is a strong sense that the `language' has not changed; on the other hand, if you change the semantics but not the syntax, then it seems more like a different `language'. I guess even Perl people, who are certainly among the staunchest syntax evangelists in existence, must recognize this fact, since they plan to allow Perl 6 to have several different concrete syntaxes, though they still call it Perl 6.
There is no perfect syntax. A good syntax for a language is one which is
optimized for the language semantics. Even among optimal solutions, there are
usually several representations which are equally good, or make balanced
trade-offs. (Example: cons and snoc lists.)
I haven't yet met a language whose syntax issues overwhelmed its semantic
issues. I have used Scheme, C, ML, Haskell, Prolog, Perl and many other
languages, and I never found the syntax to be even a quarter as important as
the semantics. Maybe XSL is an exception, but it is such an extreme I don't
even count it---though I feel compelled to keep mentioning it! Anyway, if XSL
had much(, much, MUCH) more to offer me on the semantic side, I would probably
grin and bear it. If I had access to an Emacs, that is.
(The first person who now mentions Intercal, TECO or any of that other crap will
get a complimentary bitchslap.)
To put it pedantically, the relative importance of syntax and semantics is
analogous to the relative importance of manual optimization and algorithmic
optimization. Translating a fragment of your program into assembly for speed
may buy you some constant factors (but, on today's machines, only if you
are especially clever), but improving your algorithms can get you exponential
increases in performance. I think it is the same with syntax and semantics.
If the syntax is so terrible it gets in your way, you can use a programmable
editor or a macro system or something. (That might even make XSL usable...!) If
that is still not enough, and you need some sort of metaprogramming device,
then something semantic is lacking in the language. After all, a
compiler is a metaprogramming device: it's a device which transforms a
semantic specification into a syntactic entity which is
directly executable, i.e., without further analysis.
Now, you can argue that for short and dirty one-off programs like the sort Perl
seems to be designed for, the tight-loop assembly optimization argument implies
that syntax becomes relatively more important. I can agree that that is true to
an extent, but again I think this is addressing the symptom, not the
problem. The problem is, why are you writing so many short and dirty one-off
programs? Programming is supposed to be about automation and reuse, isn't it?
Short, trivial programs like this tend to solve trivial problems; we should not
be writing trivial programs each time we run into a trivial problem. We should
be solving them once and for all, with comprehensive and uncompromising
solutions.
Hacks breed hacks. (Yes, you may quote me. :)
BTW, I should say that, while I think syntax is not interesting, metasyntax issues (parsers, for example) are. (In general, it seems meta-things are usually more interesting than the things themselves.)
Well, I could ramble on about this for ages, for example about why I don't
think LISP/Scheme's syntax gives any particular advantages for metaprogramming,
but there are other points to cover...
from http://www.ai.mit.edu/%7Egregs/ll1-discuss-archive-html/msg00311.html
> There will be no concrete definition of
> "lightweight language" but let's look at the creators intent. Did the
> creator optimize common micro operations like string interopolation,
> regular expression matching, basic arithmetic, networking libraries,
> graphic libraries and list processing? Or did they spend more effort on
> in-the-large features like exception handling, type declarations,
> interfaces, polymorphism and templates? If the former, it is a
> lightweight language. If the latter, it is a scalable language. If
> *both*, it is the language you want to choose.
Actually, I rather like this message (the lightweight vs. scalable perspective
is interesting), but what bugs me is that the poster regards "string
interopolation, regular expression matching, basic arithmetic, networking
libraries, graphic libraries and list processing" as language issues. They are
not language issues; they are library issues. You can add these
capabilities to any reasonable language by writing a suitable library.
If you're crazy about Perl regexps, use the Perl regexp library. It's not
written in Perl; it's written in C. (What does that tell you about Perl?)
Are you wild about graphics programming? Hey, look no further: there's MSFC,
wxWindows, Tk, Gnome, KDE, Qt, Mesa, ... What do they have in common? They are
all libraries.
Need basic arithmetic? No problem; every modern processor comes equipped with
instructions which access its ALU and FPU. You don't even need a library here.
Want bignums? Get one of the exact real arithmetic packages.
Where are the language issues? These packages (basic arithmetic excepted) are
all written in C or C++, not in high-level languages. There aren't any language
issues here, except inasmuch as the language influences how much abstraction
you can do in the libraries, and even there the library design issues tend to
dominate, because the library designers are restricted by the semantics of the
underlying C/C++ implementations.
If your approach to designing your language starts with one of these topics,
you are solving a non-problem.
from http://www.ai.mit.edu/%7Egregs/ll1-discuss-archive-html/msg00274.html
Here is one of the few places I disagree with Shriram. While trying to
formulate a definition of "lightweight language", he comes up with this
criterion:
3. I should be allowed to submit, but not run, buggy programs. That
is, for rapid prototyping, I should be able to write a program with
3 functions, A, B, and C; C can be buggy, but so long as I use A
and B only, I shouldn't be prevented from running the code. ML
fails at this. The ML-like type systems for Scheme succeed.
First, the claim I disagree with: by his definition, ML does not, in fact, fail
this criterion, since ML does type-checking, not `bug-checking'.
That's a minor quibble, though. What he is trying to get at here is that ML
type-checking disallows unbound variables, so you cannot leave a function
`unimplemented', even if it never gets called at runtime.
In Scheme (Shriram is a Schemer), on the contrary, you are allowed to do
this. For example, you can write
(define (f x) (+ y 1))
even if y is not in scope. If f ever gets called, you will get a runtime error
saying y is unbound.
This is one of the things I don't really like about Scheme, actually. It is
pretty easy, for example, for typos to give you runtime errors. In my example,
I could easily have meant to have written x, but have typed y by mistake.
In an ML-like language, perhaps you might reasonably assign the type 'false',
or a fully polymorphic type to an unbound variable, but forcing the programmer
to bind it to _something_ explicitly ensures that he doesn't completely forget
about it later.
Forcing him to bind it doesn't mean he has to write a whole implementation for
it, though. In Haskell, you can bind a variable to the bottom value (e.g., to
'error'). In ML, you could bind a value to something that raises an
exception. In both cases, the variable will be assigned a fully polymorphic
type (unless you constrain it with a type annotation).
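For example, a tiny Haskell sketch of that idiom (just an illustration):

-- g is declared but deliberately left unimplemented: it is bound to bottom.
g :: a                       -- fully polymorphic, so it unifies anywhere
g = error "g: not implemented yet"

-- f never uses g, so calling f is perfectly safe.
f :: Int -> Int
f x = x + 1

main :: IO ()
main = print (f 41)          -- prints 42; only evaluating g raises the error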
(Of course, Shriram is trying to define the notion of "lightweight language",
not "good language", so I suppose it is not impossible that he might agree with
me here.)
Well, that's enough for tonight. Apologies for the typos, etc., which
undoubtedly exist; it's now 3am and I'm tired.
Fredrik Lundh - Re: The History of T
12/14/2001; 6:51:51 AM (reads: 5781, responses: 0)
> for most people this issue was settled long ago, once they figured out how to read and write BNF grammars
Except that in the real world, most people don't know how to read and write BNF grammars.
Bryn Keller - Re: The History of T
12/14/2001; 10:01:46 AM (reads: 5838, responses: 0)
> > for most people this issue was settled long ago, once they figured out how to read and write BNF grammars
> Except that in the real world, most people don't know how to read and write BNF grammars.
Not only that, but people who can, don't spend their time writing new frontends for existing languages.
I do think that the importance of syntax is often overrated, but I have to say I think it is important at some level, if only because of the "ergonomic" factors of programming: you have to type programs, and you have to read them. Like many professional programmers, I have some slight wrist/hand aches, so I tend to prefer syntax that doesn't involve many shift keys. My hands are noticeably more sore after 8 hours of Java than after 8 hours of Python. So the prize for most comfortable language to type goes to Forth, but personally I find Forth hard to read. Reading is the other important part. Brevity is important, too (though this can be taken to cryptic extremes, which is not what I'm talking about).
As long as I've been programming, I've hated S-expression syntax. But I've always admired the simplicity, clarity, and power of Scheme. Recently I decided to just start writing in it, and never mind the syntax. Not too surprisingly, S-exp syntax has grown on me. I'm not a "convert", but I don't mind it anymore. So there's a point in favor of the syntax-doesn't-matter argument. I freely admit you can learn a new syntax faster than you can learn a new semantics. I'm just saying syntax is still important.
Another important aspect of syntax is that it's important for the user community. If I invent a new s-exp or Pythonic syntax for Java, I can't submit articles that use that syntax to trade journals, and (more importantly) I probably can't use that syntax at work, because it's not standard, and people will be attached to the standard.
Even if you make inventing new syntax easy, it doesn't happen in a useful way. Take Objective Caml for instance. This is the only language I'm aware of off the top of my head which ships with a preprocessor that allows you to completely redefine the syntax. Now, Ocaml has a somewhat (IMHO) warty syntax. I much prefer Haskell's. A lot of people dislike OCaml's syntax, including (I assume) the author of the preprocessor. He wrote a "Revised" syntax for OCaml, which is somewhat cleaner and more consistent. Most people don't use it. Examples don't use it. He also wrote an s-expr syntax for OCaml with the same tool. Number of examples of s-expr OCaml I have seen outside of that preprocessor grammar itself: zero.
So I think syntax does matter. I'm much more worried about semantics, but I think syntax has a great deal of impact on the "feel" of the language, the user community, and on reading and writing of programs. I don't think we're at the point yet where we can just say "syntax doesn't matter, it's simple to roll your own".
Bryn Keller - Re: The History of T
12/14/2001; 10:05:06 AM (reads: 5763, responses: 0)
> Well, I could ramble on about this for ages, for example about why I don't think LISP/Scheme's syntax gives any particular advantages for metaprogramming, but there are other points to cover...
Because it's basically a parse tree, and you can manipulate the parse tree for any syntax, or is there some other reason you have in mind?
Bryn Keller - Re: The History of T
12/14/2001; 10:16:33 AM (reads: 5758, responses: 0)
> If you're crazy about Perl regexps, use the Perl regexp library. It's not written in Perl; it's written in C. (What does that tell you about Perl?)
> Are you wild about graphics programming? Hey, look no further: there's MSFC, wxWindows, Tk, Gnome, KDE, Qt, Mesa, ... What do they have in common? They are all libraries.
I'll assume you were just tired for this one. :-) You're bashing Perl for using C libraries in one sentence, and then saying you should re-use all these great existing C/C++ libraries in the next sentence! I think you're right though that library and language (where language = semantics, or = semantics + syntax) often get confused. At least in Perl's case, that's because the syntax makes special provisions for using, e.g., regexps. The open question is: is this justifiable? I'm guessing you'd say no, and so would I. I'd really like to be able to define syntax that helps me use a particular library as macros, though, as one might do in Scheme, Dylan, or what have you.
Bryn Keller - Re: The History of T
12/14/2001; 10:34:20 AM (reads: 5756, responses: 0)
> Now, to get really abstract: This pattern of things at the object level
> being subsumed by constructions at the meta-level is more than a
> coincidence. It is an instance of a free construction across semantic
> dimensions which can actually be formalized and, indeed, it is one of
> the things which forms the foundations for the programming language I
> am designing.
Can you tell us more about this? Sounds interesting.
pixel - Re: The History of T
12/14/2001; 11:25:04 AM (reads: 5769, responses: 0)
> They are not language issues; they are library issues. You can add these capabilities to any reasonable language by writing a suitable library.
Unless you have great flexibility over the syntax, this is false (see below).
> If you're crazy about Perl regexps, use the Perl regexp library.
Alas, this is not so simple. In many languages, "\d" and "d" are the same. The result is:
- either you write "\\d", losing the readability of regexps (which can be hard to read already),
- or you preprocess your language.
A library without sugar can be sour!
(I've already trolled about this on LtU.)
Frank Atanassow - Re: The History of T
12/14/2001; 11:42:03 AM (reads: 5777, responses: 0)
Hi Fredrik,
We got off on a bad start, so I'm going to try to choose my words carefully. (I really do want to have objective, civilized discussions, but I tend to take a few too many liberties here because it's a weblog.)
> for most people this issue was settled long ago, once they
> figured out how to read and write BNF grammars
Except that in the real world, most people don't know how to read and write BNF grammars.
First, BNF was published in 1960, and was used to describe the syntax of Algol 60. That was over 40 years ago. Is it unreasonable to expect a programming professional to know BNF?
Maybe Python does not target professional programmers. I have seen the CP4E (Computer Programming For Everyone) document, and I think it would be great if we could get more people programming, but let's try to structure their experiences in a reasonable way. You can't even write a program unless you know what a well-formed program is, and what isn't, and the best way to learn that is BNF.
Second, XML is not so far from BNF, and it seems to be quite popular.
Third, what is "the real world"? Is it the set of all programmers in industry (many, dare I say `most', of whom have no formal education in CS)? The set of all programmers, everywhere? Or the set of all people in the world?
I think "the real world" is a loaded phrase. (Thank you for not capitalizing it, BTW. :) It means what you want it to mean.
Fourth, BNF is for describing syntax, but the underlying principles are much more broadly applicable. For example, initial algebras are almost identical to BNF grammars, and an intuitive understanding of these is, I believe, a prerequisite for building non-trivial programs. By "intuitive" I mean that, though you may not know what an initial algebra is, someone who is incapable of understanding initial algebras will never be able to write non-trivial programs. (OO programmers will be more comfortable with final coalgebras, but they are just dual to initial algebras, so it's no biggy.)
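To make the correspondence concrete, here is a small Haskell sketch (my own toy grammar, nothing more): a BNF grammar, the datatype whose values are exactly its parse trees, and the fold that the initial-algebra property gives you for free.

-- Grammar:  expr ::= num | expr "+" expr | expr "*" expr
-- The corresponding datatype: its values are the grammar's parse trees.
data Expr
  = Num Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving Show

-- The fold (catamorphism): the unique map out of the initial algebra
-- into any other algebra of the same shape.
foldExpr :: (Int -> r) -> (r -> r -> r) -> (r -> r -> r) -> Expr -> r
foldExpr num add mul = go
  where
    go (Num n)   = num n
    go (Add a b) = add (go a) (go b)
    go (Mul a b) = mul (go a) (go b)

-- One algebra among many: evaluation.
eval :: Expr -> Int
eval = foldExpr id (+) (*)

main :: IO ()
main = print (eval (Add (Num 1) (Mul (Num 2) (Num 3))))  -- prints 7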
Frank Atanassow - Re: The History of T
12/14/2001; 12:24:50 PM (reads: 5848, responses: 0)
> > Except that in the real world, most people don't know how to
> > read and write BNF grammars.
[..]
> I do think that the importance of syntax is often overrated, but
> I have to say I think it is important at some level, if only
> because of the "ergonomic" factors of programming: you have to
> type programs, and you have to read them. Like many professional
> programmers, I have some slight wrist/hand aches, so I tend to
> prefer syntax that doesn't involve many shift keys. My hands are
> noticably more sore after 8 hours of Java than after 8 hours of
> Python.
Java is long-winded. No argument here! (I teach Java, BTW.) But is that because of the syntax? I don't agree there.
Operationally, there is not much difference between Java and Python. Both are garbage-collected, object-oriented, etc. The reason Java is long-winded is that it forces you to write over and over things which, at a more general level of abstraction, are instances of each other. If Java supported type inference and parametric polymorphism, I am sure that you would find it less verbose and more attractive. These are semantic features.
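A tiny Haskell sketch of what I mean (illustrative only): one polymorphic definition, inferred types, and no repetition or casts at the use sites.

-- One definition covers every pair type; the signature is optional, GHC infers it.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = do
  print (swap (1 :: Int, "one"))  -- ("one",1)
  print (swap (True, 'c'))        -- ('c',True)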
> Another important aspect of syntax is that it's important for the
> user community. If I invent a new s-exp or Pythonic syntax for
> Java, I can't submit articles that use that syntax to trade
> journals, and (more importantly) I probably can't use that syntax
> at work, because it's not standard, and people will be attached
> to the standard.
Sure, but I am talking about the relative and practical importance of syntax and semantics. I don't deny that people are attached to syntax; if I thought most people were comfortable with switching syntaxes on a dime, I would not have written my rant. What I am saying is that, in the end, 1) semantics is the more important factor, and 2) the domain in which it is more important is the readability and writeability of new, better programs.
> Even if you make inventing new syntax easy, it doesn't happen in
> a useful way. Take Objective Caml for instance. This is the only
> language I'm aware of off the top of my head which ships with a
> preprocessor that allows you to completely redefine the syntax.
Yes, I'm very familiar with Ocaml.
> Now, Ocaml has a somewhat (IMHO) warty syntax. I much prefer
> Haskell's. A lot of people dislike OCaml's syntax [..] Most people
> don't use [the preprocessor].
So you are claiming that, though it is widely acknowledged that Ocaml's syntax sucks, people still use it, despite the fact that it's easy to change it with the preprocessor?
To me, that sounds like an argument that syntax isn't very important. If syntax is so important, and Ocaml syntax sucks, why keep using it?
> So I think syntax does matter. I'm much more worried about
> semantics, but I think syntax has a great deal of impact on
> the "feel" of the language, the user community, and on reading
> and writing of programs.
Impact on the feel: I disagree. Scheme syntax is totally different from ML syntax, but I feel quite certain that the reason Scheme feels different to me from ML is not the parentheses, but rather the lack of static typing, algebraic datatypes, and other semantic components. I feel certain about this, because every time I start programming in Scheme now, the first thing I want to program with the macro system is a static type checker, and not a parser for an ML-like grammar.
Impact on the user community: As far as attracting people to the user community, I agree that it has a big effect, since most programmers seem to hate s-exps. But that is more prejudice and culture than an inherent fault of the syntax. So do you think that quantity of users is absolutely more important than quality of users?
Impact on readability/writeability: Only in extreme cases, as I mentioned in my original message.
> I don't think we're at the point yet where we can just
> say "syntax doesn't matter, it's simple to roll your own".
We're not, but we should be. I feel the biggest hurdle is that we should have tools which can deal easily and effectively with any (at least, LALR) syntax, but we don't. Text files are, unfortunately, still the common denominator. (I think the reason for this is that we are afraid that if our tools break, we won't be able to access data in structured files. XML is the compromise we made to address this: it's structured, but it's still text, so it has to be parsed every time you read it. If we had reliable programming languages which built reliable programs, we wouldn't have to store structured data as text, since our tools would be reliable.)
Frank Atanassow - Re: The History of T
12/14/2001; 12:43:24 PM (reads: 5723, responses: 0)
Well, I could ramble on about this for ages, for example about why I don't think LISP/Scheme's syntax gives any particular advantages for metaprogramming, but there are other points to cover...
Because it's basically a parse tree, and you can manipulate the parse tree for any syntax, or is there some other reason you have in mind?
A bit more than that. A (functional) language like ML, which has a more structured syntax, typically has algebraic datatypes, which are just enough to handle the additional richness of the syntax. The analogy I often think of is similar triangles: ML has as much semantic richness to handle its own syntax (which is typed, in a way), as LISP/Scheme have to handle their own syntax, which is untyped in a way.
Bryn Keller - Re: The History of T
12/14/2001; 2:42:19 PM (reads: 5724, responses: 1)
Java is long-winded. No argument here! (I teach Java, BTW.) But is that because of the syntax? I don't agree there.
[..] The reason Java is long-winded is that it forces you to write over and over things which, at a more general level of abstraction, are instances of each other. If Java supported type inference and parametric polymorphism, I am sure that you would find it less verbose and more attractive. These are semantic features.
I'm stunned. I've been thinking of Java's wordiness as syntactic obtuseness, but that's just sloppy thinking - you're absolutely right, it's a lack of semantic features that's biting me, not syntax. Hrmph.
[..]What I am saying is that, in the end, 1) semantics is the more important factor, and 2) the domain in which it is more important is the readability and writeability of new, better programs.
Yes! Just so long as we don't dismiss (2) out of hand.
To me, that sounds like an argument that syntax isn't very important. If syntax is so important, and Ocaml syntax sucks, why keep using it?
Well, yes and no. People will still use it. But would they enjoy their work more if they didn't have to fight with the syntax? I think so. People will subject themselves to all sorts of abuse from computers, and programmers have a much higher tolerance for this pain than normal folks. But I think we owe it to ourselves to improve our lot.
Impact on the feel: I disagree. Scheme syntax is totally different from ML syntax, but I feel quite certain that the reason Scheme feels different to me from ML is not the parentheses, but rather the lack of static typing, algebraic datatypes, and other semantic components. I feel certain about this, because every time I start programming in Scheme now, the first thing I want to program with the macro system is a static type checker, and not a parser for an ML-like grammar.
Mmm, not quite the feel I'm talking about. I mean feel in the sense, "how do you feel after getting an eyeful of source code in this language"? If just looking at the code fatigues you, this is an important user interface problem. For instance, Dylan is an interesting language to me, it gets a lot of things right. But Dylan source code for some reason tends to have lines that are rather too long (mostly due to long naming conventions and especially to long type information), so my eye (at least) slides right over it without absorbing much. It takes a lot of concentration, because it doesn't scan well. Python, on the other hand, has very nice block indentation which describes the structure of the program visually, and it scans well because the lines tend to stay fairly short. I maintain that readability and writability is an important factor, and too-often overlooked.
> I don't think we're at the point yet where we can just
> say "syntax doesn't matter, it's simple to roll your own".
We're not, but we should be.
Yup.
Anton van Straaten - Re: The History of T
12/15/2001; 12:44:48 AM (reads: 5699, responses: 0)
So you are claiming that, though it is widely acknowledged that Ocaml's syntax sucks, people still use it, despite the fact that it's easy to change it with the preprocessor?
To me, that sounds like an argument that syntax isn't very important. If syntax is so important, and Ocaml syntax sucks, why keep using it?
Because the people who are using it were not put off by its syntax in the first place. I'll confess to being put off by Ocaml's syntax; and I'm pretty flexible about languages, having learned and worked in many, both imperative & functional. Another language whose syntax I never liked, but whose semantics I loved (back in the day), was Smalltalk. In both cases, that stopped me from using those languages seriously, because I just didn't enjoy the experience of writing and reading code in those languages.
In my experience, syntax is even more important to "average" programmers than it is to me and other good programmers. And yes, it's "unreasonable to expect a programming professional to know BNF", at least in today's world, and assuming you define a programming professional as someone who's paid to write programs. I'm not defending that state of affairs - I love meta-things as much as anyone, and enjoy writing little languages using parser generators driven by formal grammars - but the sorry truth is that there are a lot of "professional programmers" out there who have picked up everything they know as they go along, starting e.g. with something like Perl in high school, to use an example of someone I worked with recently.
Also, rolling your own syntax is a long way from being viable, even if it's easy to do technically. A big reason that a language like Java succeeds, despite its many and painfully obvious flaws, is all of the tools that are available for it, such as editing environments and documenting tools. Custom syntaxes would have to be handled by such tools. A big part of the reason that "mainstream" languages are so primitive today is that it makes it easier on the tools, and doesn't require an enormous standardization effort in order to be able to support custom everything.
So while the pointy-headed theorizing is wonderful, and I'm a big fan of it myself, some of what's being said here is classic "ivory tower" - it simply dismisses what's really going on down on the ground, where the people live in mud huts, and those who can work ivory are prohibitively expensive. Besides, the ivory-workers really don't want to be working amongst the mud huts (unless perhaps an IPO is in the offing), and would rather be in that cozy tower with the awe-inspiring 1000-foot view (and tenure).
The trickle-down effect in the language space is incredibly slow, with a timeline of decades for a given set of features (perhaps accelerating in the Internet era). But it's only been 40-odd years since the first "high level" language was invented, and there's plenty of stuff that still has to trickle. Incontrovertible proof of this can be found in the fact that the entire concept of functional programming has virtually *zero* presence in the commercial world, some few special-case exceptions like Erlang notwithstanding.
While I'm rambling, I'll add one more thing: languages like Perl have something to add to the mix here, in terms of languages that work the way non-mathematical humans expect. Functional programming and languages derived from formalisms such as the lambda calculus are a huge leap forward, compared to what came before; but they only address one side of the equation. The other side won't be addressed by mathematicians or mathematical computer scientists, especially not if they think "syntax doesn't matter". It will be addressed by people who are thinking about other, less uncompromising things, in the domain of what humans are comfortable with, rather than what theory dictates.
I consider Perl's success evidence of my last claim - a language that superficially eschews many traditional and formal approaches, yet provides features integrated into the syntax of the language in untraditional ways, that ordinary users find useful and usable. But this human factor is still largely unexplored, and I think it's quite possible that the future may very well be custom syntax layers built on top of a functional, intermediate-language core. (The JVM and .NET come to mind as early precursors of this approach) We still seem to be a long way from that, though.
Frank Atanassow - Re: The History of T
12/15/2001; 3:57:05 AM (reads: 5707, responses: 1)
If you're crazy about Perl regexps, use the Perl regexp library. It's not written in Perl; it's written in C.
(What does that tell you about Perl?)
Are you wild about graphics programming? Hey, look no further: there's MSFC, wxWindows, Tk, Gnome, KDE, Qt, Mesa, ... What do they have in common? They are all libraries.
I'll assume you were just tired for this one. :-) You're bashing Perl for using C libraries in one sentence, and then saying you should re-use all these great existing C/C++ libraries in the next sentence!
You are generous. :) Yes, I guess I was being unfair to Perl.
Part of what was bugging me was that I think useful programming languages ought to be able to implement their own runtimes. Perl can't do this, but neither can ML. (Some people disagree that you can't, for example, write a garbage collector in ML, though.)
I think you're right though that library and language (where language = semantics, or = semantics + syntax) often get confused. At least in Perl's case, that's because the syntax makes special provisions for using, e.g., regexps. The open question is: is this justifiable? I'm guessing you'd say no, and so would I.
What I don't like is that languages like Perl make specific provisions in the language for specific external libraries and applications. Though it certainly helps when you want to use that library, or build an application of that type, it makes the language bigger, more complex, more implementation-dependent and no more useful for applications outside that narrow area. This is a bad tradeoff.
I'd really like to be able to define syntax that helps me use a particular library as macros, though, as one might do in Scheme, Dylan, or what have you.
The general case of this is domain-specific languages and language embedding. For example, theorem-provers and proof assistants need to do this. For those sorts of things, a good syntax extension mechanism certainly has its place. But it really takes a lot of work to make it clean. Scheme's macros are better than most in this respect because they are hygienic, but I think they are still quite limited.
Anyway, I have found that combinator libraries in Haskell give you most of the benefits that syntax extensions would, plus many more because you can still use the facilities of Haskell itself to do abstraction. A good example is parser combinators: they are far more powerful than regular expressions, plus you can write new combinators, give them a name and pretend they are primitive; you can even create parsers dynamically. Here at Utrecht, we have two parser combinator libraries (Parsec and UU_Parsing), and they are both quite efficient and fast.
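For anyone who hasn't seen the style, here is a minimal sketch in the flavour of Parsec (illustrative only, written against the modern Parsec API rather than the one we had in 2001): combinators are ordinary named values that you reuse and compose.

import Text.Parsec
import Text.Parsec.String (Parser)

-- A named sub-parser: combinators are ordinary values you can reuse.
number :: Parser Int
number = read <$> many1 digit

-- Composition: a comma-separated list of numbers, built from `number`.
csv :: Parser [Int]
csv = number `sepBy` char ','

main :: IO ()
main = print (parse csv "" "1,2,30")   -- Right [1,2,30]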
Frank Atanassow - Re: The History of T
12/15/2001; 3:59:24 AM (reads: 5669, responses: 0)
Now, to get really abstract: This pattern of things at the object level being subsumed by constructions
at the meta-level is more than a coincidence. It is an instance of a free construction across semantic
dimensions which can actually be formalized and, indeed, it is one of the things which forms the
foundations for the programming language I am designing.
Can you tell us more about this? Sounds interesting.
Happy to. I started writing a post about it, but it got long and involved. Give me a few days and I will either post something here, or on my web page and post the link.
Frank Atanassow - Re: The History of T
12/15/2001; 6:01:04 AM (reads: 5700, responses: 0)
So while the pointy-headed theorizing is wonderful, and I'm a big fan of it myself, some of what's being said here is classic "ivory tower" - it simply dismisses what's really going on down on the ground, where
the people live in mud huts, and those who can work ivory are prohibitively expensive. Besides, the ivory-workers really don't want to be working amongst the mud huts (unless perhaps an IPO is in the
offing), and would rather be in that cozy tower with the awe-inspiring 1000-foot view (and tenure).
Hey, thanks for pigeonholing me so quickly.
In my experience, syntax is even more important to "average" programmers than it is to me and other good programmers.
No argument there! The point is not whether it is more important to them, but whether that fact is an obstacle to becoming "above-average" programmers.
And yes, it's "unreasonable to expect a programming professional to know BNF", at least in today's world, and assuming you define a programming professional as someone who's paid to write programs.
No, I was thinking of a `professional' as a person who is competent and in command of his field, not just any bozo who knows how to use Microsoft FrontPage.
I know that there are lots of people paid to be programmers who were never educated as programmers. I worked in industry for five years, at a company (not a dot com IPO, BTW) with about 20 programmers, and of them I was the only person who had a degree in computer science, rather than, say, electrical engineering. I was shocked when I discovered that no one knew what I was talking about when I mentioned `big O' notation or algorithmic complexity.
If we want to turn programming into a discipline analogous to engineering, something which produces reliable products in a regular fashion, and not just by shooting in the dark, then we have to create a professional workforce. The most effective way to create professionals is to educate and train them in an academic setting.
Maybe you find that elitist and unfair to all the people whose God-given right it is to grab a piece of the IT dream but, hey, let's not barricade ourselves in our ivory towers and ignore "The Real World", eh?
If you wanted to build a skyscraper, would you hire some guy just because he knows how to use the latest jackhammer model, or would you hire a structural engineer, who justifies his designs with detailed diagrams, keeps up on the materials science literature, can prove scientifically that the structure is safe and reliable, and has a degree to show he is qualified?
I'm not defending that state of affairs
You're not trying to change it either, though, are you?
No doubt some of my goals will prove unrealistic, but I will find out which ones they are sooner or later, and perhaps I will make a difference before I die. But it won't be by telling other people what the status quo is.
Also, rolling your own syntax is a long way from being viable, even if it's easy to do technically.
I don't advocate rolling your own syntax. Why would I do that? I've been saying that syntax is largely irrelevant.
On the other hand, if syntax is largely irrelevant, then factoring it out of an application, allowing it to vary, and making it easy to change makes sense. That current tools don't support this is a problem, admittedly, and is one of the things that would need to change to make that feasible. Isn't something like this happening already, with XML?
But it's only been 40-odd years since the first "high
level" language was invented, and there's plenty of stuff that still has to trickle. Incontrovertible proof of
this can be found in the fact that the entire concept of functional programming has virtually *zero* presence
in the commercial world, some few special-case exceptions like Erlang notwithstanding.
A couple of years ago I would have agreed with this, but not anymore. Several archetypal features of functional programming are becoming more and more widespread. Both garbage collection and closures (or some semblance thereof) are available in Java, Perl, Python and Ruby. (I think C# must have GC too.) Python's latest version has adopted lexical scoping. Java will probably be retrofitted with parametric polymorphism soon. Programs in these languages still don't look like FP programs, but you can hardly say there has been no effect, and with this trend, who knows what will happen in the next few years?
Ocaml seems to be making substantial inroads in the commercial community. I notice one of the regular posters to the caml-list now is a fairly well-known games programmer using Ocaml for his current project.
Scheme is being taught very successfully to high school students in the Rice PLT program.
While I'm rambling, I'll add one more thing: languages like Perl have something to add to the mix here, in terms of languages that work the way non-mathematical humans expect.
Not that old saw again. The principle of least astonishment? Hasn't every language user community cornered the market on this one by now? I seem to get conflicting reports. Here is my response:
Perl works the way Perl users expect it to, because they only expect what it does.
You need to be surprised at least once to get extra expressive power from a language. This is pretty obvious if you think about it. If something doesn't surprise you, then you probably knew it already.
Probably people who had only ever used C and then learned about Perl/Python/Ruby were surprised the first time they learned that you could execute scripts interactively, that there is such a thing as garbage collection, what a regular expression is, etc. But I strongly suspect that the rest of Perl/Python/Ruby is just abbreviations for stuff they already knew.
BTW, since I'm on the subject, I think that the most significant difference between Perl, Python and Ruby is the syntax.
(I think I would like to retract the following two sentences.)
Do you want proof? The Parrot project is planning to use the same bytecode machine to run all of them. The reason there is so much mobility between the user communities of these languages is that the underlying semantics, divorced from the syntax, is basically the same.
Ehud Lamm - Re: The History of T
12/15/2001; 10:58:13 AM (reads: 5752, responses: 0)
I agree with you on almost all points, but I think my perspective is a bit different. I separate the issue of language expressiveness from that of day-to-day usability.
I think that notations are tools for thought. Designing good notations can be extremely difficult. Part of language design is syntax design, even though this part is less exciting than working on the semantics.
Most programmers want the whole package: useful syntax and "correct" semantics.
I agree that in principle the domain-specific syntax could be built using some syntax extension mechanism (i.e., macros), just as domain-specific semantics can come from library routines.
In practice, however, most libraries for commonly used languages don't come with syntax abstractions (mostly because languages like C don't provide the required mechanisms), so programmers looking for the right syntax for a given job must learn an appropriate language.
Sad but true.
There's another side to this. Outside the scope of Lisp-like languages, syntax extension tends to cause more harm than good. Different syntaxes don't compose well, and software readability in general deteriorates.
This, of course, speaks loudly for Lisp-like syntax. Unfortunately, most programmers - even educated ones - find it rather hard to use.
Ehud Lamm - Re: The History of T
12/15/2001; 11:03:54 AM (reads: 5739, responses: 0)
Java's verbosity can be the result of bad semantics and bad syntax, can it not?
Anton van Straaten - Re: The History of T
12/15/2001; 11:49:52 AM (reads: 5655, responses: 0)
Hey, thanks for pigeonholing me so quickly.
I was enjoying myself, don't take it personally! :) I put myself in the category of people who'd better be paid a whole lot to work amongst the mud huts, and I've often wondered whether I wouldn't have been happier in academia.
What I didn't make clear in my message, but should have, is that I agreed strongly with most of what you said - about e.g. built-in datatypes, solving problems once and for all, hacks breeding hacks, etc., because it mirrors my experience. Meta-things are more interesting than the things themselves, because once you've solved the meta-things, the things themselves become trivial instantiations. But in the commercial world, money is often made and jobs kept secure by solving the same problem over and over without meta-tools. It was my agreement with most of your points that led me to respond to the rest.
I'll respond to the rest of your message later...
Frank Atanassow - Re: The History of T
12/15/2001; 12:28:50 PM (reads: 5647, responses: 1)
What I didn't make clear in my message, but should have, is that I agreed strongly with most of what you said [..]
OK, glad to hear it, truly.
But in the commercial world, money is often made and jobs kept secure by solving the same problem over and over without meta-tools.
Which is part of the reason I got out of that place! Bryn put it pretty well. We are a bunch of money-mongering masochists:
People will subject themselves to all sorts of abuse from computers, and programmers have a much higher tolerance for this pain than normal folks. But I think we owe it to ourselves to improve our lot.
Amen, brother. :)
Dan Shappir - Re: The History of T
12/16/2001; 2:37:59 AM (reads: 5653, responses: 0)
One of the selling points MS is using in trying to push .NET over Java is multi-language support. The argument generally goes something like: "they have multi-platform support but we have multi-language support (and anyway we also have multi-platform support: Windows XP, Windows 2000, ...)" :-)
I've always had a bit of a problem with this multi-lang support issue. All the .NET langs that MS provides compile to the same byte-code, use the same data types, use the same library funcs, etc. Indeed, it has always seemed to me that all these languages are the same language (C#). Put in the context of this thread: they differ on syntax but have the same semantics.
MS seems, therefore, to think that syntax is as important as semantics, at least from a marketing perspective. I disagree, and so apparently do many "professional" programmers: see the uproar about VB.NET. While preserving most of VB's syntax, it changed the semantics, resulting in a language that was just too different for the average Joe VB programmer.
The only scenarios where different langs on .NET seem to have any type of advantage over simply using C# for everything is when they break out of the .NET common-type mold, e.g., unmanaged C++, interpreted JavaScript. I'm still waiting to see how the .NET versions of Scheme, Haskell and Smalltalk come out. My bet, however, is that 3 years from now everybody using .NET will be using C#.
Frank Atanassow - Daily programming and syntax composition
12/16/2001; 6:30:52 AM (reads: 5650, responses: 1)
[Ehud writes:] I agree with you on almost all points, but I think my perspective is a bit different. I separate the issue of language expressiveness from that of day-to-day usability.
Since I have probably already demonstrated that I am a hardliner on these issues, let me go the last mile and tentatively suggest a really heretical idea: maybe we shouldn't be using programming languages on a day-to-day basis at all! Maybe we should be sitting at a desk, thinking hard and planning out (using the semantics!) what we are going to do, and then, at the end of the week, when we are comfortable with our plans, translate all of that into the syntax of the programming language when we sit down in front of a keyboard.
Is this different from waterfall software engineering, or one of the other SE methodologies? Maybe not, but what I'm imagining now is a language with a formal semantics, and a programmer as someone who spends 90% of his time doing mathematics using that semantics, and only 10% coding. To me this is distinct from SE methodologies, where you are just mucking around with diagrams that have only a tenuous connection with any specific language's semantics.
There's another side to this. Outside the scope of Lisp-like languages, syntax extension tends to cause more harm than good. Different syntaxes don't compose well, and software readability in general deteriorates.
"Different syntaxes don't compose well" popped out to me. At first, I thought I agreed strongly with this, but then I realized it's more accurate to say: "syntax recognizers (parsers) don't compose well." (Maybe you didn't mean that, though.) But even that isn't accurate: parser combinators compose well, by definition. Perhaps this can be exploited for syntax extensions? Here in my group, Doaitse Swierstra and Arthur Baars have been experimenting with using the UU_Parsing library to implement syntax macros.
This, of course, speaks loudly for Lisp-like syntax.
Does it? Again, I agreed with this at first, but a moment's consideration makes me doubt it. LISP-like syntax is just a subset of ML-like syntax, since I can always fully parenthesize an ML program. So you could have a LISP-like macro system in a language with ML-like syntax, if you are willing to put up with that. The problem is how to extend that sort of system to allow not just LL(1) grammars, but LR(1) (or at least LALR) grammars, but that does not obviate my point.
Frankly, I think the limited syntax extensions possible in ML and Haskell are too ad hoc. For example, because he must assign a numeric precedence and fixity to each infix operator, the designer of a Haskell module either has to assume that no one will mix his operators with the operators defined in other modules, or he has to carefully pick the precedences and fixities to work well with every other possible module, which is impossible. So, user-defined syntax in ML-like languages sort of requires global knowledge, which is clearly bad. You need a way of defining composable parsers to get around the closed-world assumption.
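A contrived Haskell sketch of the fixity problem (the operators and their precedences are made up for the example):

-- Imagine <+> comes from a pretty-printing module and |> from a pipelining
-- module, each author choosing a precedence without knowing about the other.
infixl 6 <+>
infixl 5 |>

(<+>) :: String -> String -> String
a <+> b = a ++ " " ++ b

(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = putStrLn ("hello" <+> "world" |> reverse)
-- With 6 vs 5 this parses as ("hello" <+> "world") |> reverse.
-- Had the numbers been chosen the other way round, it would parse as
-- "hello" <+> ("world" |> reverse) and mean something quite different.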
Frank Atanassow - Embedding Perl regexps
12/16/2001; 6:47:28 AM (reads: 5681, responses: 0)
[Pixel wrote:]
If you're crazy about Perl regexps, use the Perl regexp library.
Alas, this is not so simple. In many languages, "\d" and "d" are the same. The result is:
- either you write "\\d", losing the readability of regexps (which can be hard to read already),
- or you preprocess your language.
There are two other options.
- You can map the backslash to a non-backslash character, dynamically.
- You can represent a regular expression as an abstract datatype, and use constructors or combinators to build it up as an abstract syntax tree.
IIRC, the latter is what Olin Shivers did for his Scheme regexp package. It has the advantages of making regular expressions more readable (though, of course, significantly longer), and of disallowing at least some ill-formed regular expressions (forgetting a closing parenthesis, for example)---though, since Perl's regexps have a lot of strange extensions, I'm not sure if it's feasible to try to eliminate all the possibilities for ill-formedness.
A similar idea has been used by others to handle printf format strings in an abstract, and even extensible, way in typed languages.
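A rough Haskell sketch of the abstract-datatype approach (mine, not Shivers' actual SRE notation): the regexp is a tree built from constructors, so an unbalanced parenthesis simply cannot be expressed.

-- Regular expressions as an abstract syntax tree.
data Regex
  = Lit Char          -- a single literal character
  | Seq Regex Regex   -- concatenation
  | Alt Regex Regex   -- alternation
  | Star Regex        -- zero or more repetitions
  deriving Show

-- A derived combinator: "one or more", defined once and given a name.
plus :: Regex -> Regex
plus r = r `Seq` Star r

-- The thread's example, "c[ad]+r", built structurally:
cadr :: Regex
cadr = Lit 'c' `Seq` plus (Lit 'a' `Alt` Lit 'd') `Seq` Lit 'r'

main :: IO ()
main = print cadr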
Fredrik Lundh - Re: The History of T
12/16/2001; 6:53:54 AM (reads: 5621, responses: 1)
> I really do want to have objective, civilized discussions
Sorry Frank, but I don't think I can have a civilized discussion with someone who cannot refrain from showing off his academic arrogance in every single post (referring to people as bozos, whining about your ex-colleagues not having the right academic degrees, that old skyscraper and "we, the academics, have a God-given right to control people" nonsense, etc).
So I'll stay out of this discussion; shouldn't have entered in the first place.
But before I leave, just let me mention that for me, being able and allowed to read and write computer programs is a basic human right. To quote Bob Frankston, "Just as there weren't going to be enough phone operators, there aren't enough programmers to add all the little bits of intelligent behavior we are going to expect of the infrastructure.". Our society cannot afford to keep programming away from the masses. It's all about empowering people. Every single one of them.
Over and out /F
Ehud Lamm - Re: The History of T
12/16/2001; 8:15:09 AM (reads: 5698, responses: 0)
People, people, let's try to keep this professional. Fredrik, I want to hear your views. I also learn from Frank. I have my own views, of course, too.
I don't want LtU to become a second comp.lang.misc. We want informed opinions, and this site is based on linking interesting work, esp. research work. But let's try not to hurt each other's feelings. This doesn't promote fruitful discussion.
pixel - Re: The History of T
12/16/2001; 8:38:32 AM (reads: 5608, responses: 0)
Oh my! Has anyone guessed why Perl's regexps were so much liked?
Well, wonder why I prefer "c[ad]+r" over (: "c" (+ ("ad")) "r") ! (example from Olin Shivers)
Even if people are not very used to BNF grammars, having "+" and "*" as prefixes does not ease understanding.
And don't get me wrong, I like static checking, but not the unneeded syntax cost. I really like Caml's printf, which has static checks together with expressivity and a smooth transition from C (try to i18n without printf and you'll see!).
> You can map the backslash to a non-backslash character, dynamically.
Uh? I don't grasp this. What I meant is:
guile> "\d"
"d"
implying you can't use Perl's syntax for regexps.
Now I'm not saying Perl's regexps are the ultimate choice. But they have proven their expressivity and power.
Frank Atanassow - Inalienable rights?
12/16/2001; 8:42:33 AM (reads: 5635, responses: 1)
But before I leave, just let me mention that for me, being able and allowed to read and write computer programs is a basic human right.
Oh yeah, "life, liberty and the pursuit of computer programming". How could I have forgotten that one?
Besides, certainly no one is planning to take away your freedom to read or write programs.
To quote Bob Frankston, "Just as there weren't going to be enough phone operators, there aren't enough programmers to add all the little bits of intelligent behavior we are going to expect of the infrastructure.".
Didn't Frederick Brooks already put to rest the idea that throwing programmers at burgeoning software problems is a viable solution?
Our society cannot afford to keep programming away from the masses. It's all about empowering people. Every single one of them.
I guess if you believe this, I can see why you react so strongly to what I've said. I see zero connection between political empowerment and programming language design. For me it is about technical empowerment, the ability to write better programs more quickly and easily, not about turning everybody on earth into a programmer. (I should say, a "programming professional," by my earlier definition.)
I do want to increase the transfer of information from academia to industry (and vice versa, but admittedly it's a fairly one-sided proposition for me); that's part of why I post this stuff here, and part of why I teach.
On the other hand, if something has been in the literature for 40 years or so, I don't want to spend my time explaining that, when people are perfectly capable of picking up a book and reading a much better presentation written by someone who was completely focused on that topic.
What is needed is probably to motivate programmers better. Many programmers don't see any value in computer science, because they never see it applied. Peer pressure is also a factor, but I'm a believer in The-Right-Thing winning out, not Worse-Is-Better. :)
|
|
Frank Atanassow - Re: The History of T
12/16/2001; 8:55:18 AM (reads: 5623, responses: 0)
|
|
No wonder I prefer <tt>"c[ad]+r"</tt> over <tt>(: "c" (+ ("ad")) "r")</tt>! (example from Olin
Shivers) Even if people are not very used to BNF grammars, having "+" and "*" as prefixes
does not ease understanding.
To reply to your second remark first, if you are already a Scheme programmer, you are not going to have trouble with prefix syntax.
As for brevity, if your program is a big one, the extra verbosity is not going to matter much. If your big program has lots and lots of regexps, then the readability will certainly be improved. Also, you will be able to name and combine regexps, which is a win. Only if your program is really small will the sexp form increase its size by a non-negligible percentage.
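To make the naming-and-combining point concrete, here is a toy sketch in Haskell (this is not Olin's SRE notation, just the general idea of patterns as structured values that you name, compose, and only then render to string syntax):
-- A tiny structured representation: patterns are ordinary values.
data Re = Lit String | OneOf [Char] | Seq [Re] | Plus Re

render :: Re -> String
render (Lit s)    = s
render (OneOf cs) = "[" ++ cs ++ "]"
render (Seq rs)   = concatMap render rs
render (Plus r)   = "(" ++ render r ++ ")+"

-- Named fragments compose like any other values.
cadr :: Re
cadr = Seq [Lit "c", Plus (OneOf "ad"), Lit "r"]

main :: IO ()
main = putStrLn (render cadr)   -- prints c([ad])+r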
You can map the backslash to a non-backslash character, dynamically.
uh? i don't grasp this.
Pick a character like, say, %, and write things like "^%(%s%)$" instead of "^\\(\\s\\)$". Your interface wraps the regexp library's functions: it takes strings of the first form, maps % to a backslash, and passes them on to the underlying library.
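A minimal sketch of just that rewriting step, in Haskell (the helper name is made up, and the regexp library itself is left out of the picture):
-- Rewrite a pattern that uses '%' as its escape character into ordinary
-- backslash syntax, before handing it to whatever regexp library you use.
toBackslashSyntax :: String -> String
toBackslashSyntax = map (\c -> if c == '%' then '\\' else c)

main :: IO ()
main = putStrLn (toBackslashSyntax "^%(%s%)$")   -- prints ^\(\s\)$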
|
|
Ehud Lamm - Re: Inalienable rights?
12/16/2001; 9:29:10 AM (reads: 5679, responses: 0)
|
|
Oh yeah, "life, liberty and the pursuit of computer programming". How could I have forgotten that one?
Snide remarks are really not helpful.
Though I have an academic bent, I think many (even in academia) have similar views. We may not all be interested in them, but they are worth discussing.
For example, we had quite a few discussions on end-user programming. Python fans have CP4E.
Obviously the issues here are not the venerable concepts of syntax and semantics, but rather more cognitive-science issues. See work on PBD (programming by demonstration), for example.
Indeed, one should first ask whether end-user programming, or CP4E (as a goal), is of interest. And each of us is entitled to his own views (I am with Fredrik on this one...)
But from a PL research point of view, I think these questions are well worth our research efforts.
It is simply wrong to dismiss the cognitive aspects of language. One may find these less interesting than formal models, or not, but they exist nonetheless.
|
|
Frank Atanassow - Re: The History of T
12/16/2001; 9:34:00 AM (reads: 5586, responses: 0)
|
|
Oh wait, let me see if I can't entice you into a civilized discussion anyway.
referring to people as bozos,
(BTW, you may not realize it, since I suspect you are not a native English speaker, but "bozo" is not a very harsh insult.)
Do you deny that there are many programmers working in industry who overvalue their expertise? I see stories about prima donna programmers all the time. (I, uh, cough, probably fit that description myself when I was in industry, but I no longer consider myself an exceptional programmer.)
It is interesting that you call me arrogant, because I detect an unusually large amount of arrogance in the programming community. You just have to read Slashdot to encounter this phenomenon. There is a huge amount of peer pressure.
I became infected with this myself, and now I find it hinders me; as a scientist, I have to actually back up my claims with proof, but I often find myself sliding into faux-hacker styles of speech which obscure the truth. For example, my supervisor once caught me out when I said, "The solution to this problem is of course...", where I should have said "*A* solution to this..."
Programmer culture seems to be suffused with this sort of thing. I see all sorts of loaded phrases bandied about and accepted with little criticism. For example, the phrase "Information wants to be free": it's totally meaningless, but people use it all the time as if it warranted their opinions. What gets communicated by that phrase is not a fact about the nature of information, but rather a political opinion---one I agree with, by the way, but one that I think is presented dishonestly in this manner.
I once thought about writing a paper on this phenomenon, because I got so sick of it. The idea was to compare hacker culture with the American myth of the rugged pioneer, or the lone gunfighter.
whining about your ex-colleagues not having the right academic degrees,
It was not a complaint, it was an earnest expression of surprise. When I was in college, before I became employed, I really thought that to get a programming job you had to have a degree in computer science, just like you need an engineering degree to get a job as an engineer, and that the stories you hear about whiz kids making it big in industry because they spent their childhood programming at home were just something the media exaggerated to make stories interesting. Well, I think those are exaggerations, but it is not an exaggeration to say that in today's world anybody with a modicum of computer expertise can get a job as an entry-level programmer.
that old skyscraper
How would you approach the skyscraper problem? Or do you not see any problem there?
and "we, the academics, have a God-given right to control people" nonsense, etc
Now you're putting words in my mouth.
|
|
Frank Atanassow - Politics
12/16/2001; 10:03:48 AM (reads: 5565, responses: 1)
|
|
Though I have an academic bent, I think many (even in academia) have similar views. We may not all be interested in them, but they are worth discussing.
I also hold political views about programming, but insisting that "the right to program" is an inalienable right is just too absurd for me. There are a thousand things I would put first: life, liberty, the pursuit of happiness, food, access to health care, housing, companionship, etc. To jump straight to something so specific and narrow suggests a grievous misperception of what 50% or more of the world really needs and wants, e.g., food.
As for CP4E, I am all in favor of it! I'm just not in favor of teaching people Python. I already mentioned the Rice PLT courses in Scheme; I think they are fantastic, and I would love to see a similar sort of effort with an ML-like language.
|
|
Ehud Lamm - Re: Politics
12/16/2001; 10:26:36 AM (reads: 5608, responses: 0)
|
|
I wouldn't use Python for this goal myself. And, of course, I agree that there are many many things that should come before programming.
I was thinking the other day about which language I would choose these days, since I like to revisit this question every couple of months. I really don't like the ML environments I've seen, but I know I must look at OCaml one of these days. However, ML functors are really cool...
Haskell is pretty, but I think laziness is problematic. So are monads.
So should we stick with Scheme?
My SE students complain that knowing Ada isn't going to land them a job. Now convince them to learn ML...
|
|
Frank Atanassow - Re: The History of T
12/16/2001; 10:34:26 AM (reads: 5554, responses: 0)
|
|
I also think laziness is problematic, though Haskell is my favorite programming language.
Monads are great, though their treatment in Haskell is marred by complications in the type class system. What is it you don't like about monads?
Scheme is a pretty good choice, I think. There is something to be said for teaching an untyped language to beginners first, not least because it lets them experience firsthand the problems which typed languages address (and don't address, though of course I think static typing is a big win overall).
|
|
Ehud Lamm - Re: Daily programming and syntax composition
12/16/2001; 10:35:41 AM (reads: 5628, responses: 0)
|
|
Maybe we should be sitting at a desk, thinking hard and planning out (using the semantics!) what we are going to do, and then, at the end of the week, when we are comfortable with our plans, translate all of that into the syntax of the programming language when we sit down in front of a keyboard.
It is not so much that I am against this in principle, I am just sure this isn't going to work (at least not in general). Complexity is going to come and bite you in the a.. Testing and playing with a working system really helps, in my experience.
I wasn't thinking about composing parsers, though it is part of the problem. I was thinking in terms of defining correct semantics, and in terms of readability. When you think about constructs that cross expression boundaries (things like new block structures etc.) and constructs that allow nesting, things can get pretty subtle. That's why they are usually left to language designers (who may be using syntax extension mechanisms, of course, but who are much more sophisticated in dealing with such issues).
You need a way of defining composable parsers to get around the closed-world assumption
And composable syntaxes. So do you have any suggestions for tackling this?
|
|
Frank Atanassow - Re: The History of T
12/16/2001; 10:49:50 AM (reads: 5557, responses: 1)
|
|
It is not so much that I am against this in principle, I am just sure this isn't going to work (at least not in general). Complexity is going to come and bite you in the a.. Testing and playing with a working system really helps, in my experience.
There is that. Perhaps what I am trying to say is that we just need to spend less time coding, take a step back, and spend more time thinking about what we need to code, what we don't, and why or why not. There is something unnatural about spending the entire day in front of your computer, and nature is showing many of us nowadays (via wrist, arm, back and eye problems) that we just aren't built for it.
Well, nothing too profound there, I guess, and the part about nature will improve as user interfaces improve, and ubiquitous computing gains momentum.
And composable syntaxes.
Well, composable parsers induce composable syntaxes, don't they? Otherwise, I don't understand.
So do you have any suggestions for tackling this?
Not at the moment. I still have to experiment more with Doaitse's UU parsers and his attribute grammar system. But my experience with them is that, once you write the grammar, it is pretty easy to see what it recognizes, and it ensures that you can't write ambiguous grammars. I suspect the upshot is that the parsers encourage writing grammars which are fairly easy to reason about, for reasons analogous to how referential transparency makes it easy to reason about functional programs in general. It's a relative thing, though; the UU parsers still take some getting used to.
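To give a flavor of what "composable parsers" means (this is not the UU library, just a toy Haskell sketch): a parser is an ordinary value, so sequencing and choice are ordinary functions, and larger grammars are built by gluing smaller ones together.
-- A parser is just a function from input to possible (result, rest) pairs.
newtype Parser a = Parser { runParser :: String -> [(a, String)] }

char :: Char -> Parser Char
char c = Parser go
  where go (x:xs) | x == c = [(x, xs)]
        go _               = []

-- Sequencing and choice are plain functions on parsers, so grammars compose.
andThen :: Parser a -> Parser b -> Parser (a, b)
andThen p q = Parser (\s ->
  [ ((a, b), rest') | (a, rest) <- runParser p s, (b, rest') <- runParser q rest ])

orElse :: Parser a -> Parser a -> Parser a
orElse p q = Parser (\s -> runParser p s ++ runParser q s)

main :: IO ()
main = print (runParser (char 'a' `andThen` (char 'b' `orElse` char 'c')) "acd")
-- prints [(('a','c'),"d")]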
|
|
Ehud Lamm - Re: The History of T
12/16/2001; 12:07:09 PM (reads: 5601, responses: 0)
|
|
Any good links about UU parsers? I'd like to see more.
|
|
Anton van Straaten - Re: The History of T
12/16/2001; 12:52:20 PM (reads: 5549, responses: 0)
|
|
On the other hand, if syntax is largely irrelevant, then
factoring it out of an application, allowing it to vary, and making it easy
to change makes sense. That current tools don't support this is a problem,
admittedly, and is one of the things that would need to change to make that
feasible.
Agreed.
But it's only been 40-odd years since the first "high
level" language was invented, and there's plenty of stuff that
still has to trickle. Incontrovertible proof of this can be found in the
fact that the entire concept of functional programming has virtually
*zero* presence in the commercial world, some few special-case
exceptions like Erlang notwithstanding.
A couple of years ago I would have agreed with this, but not
anymore. Several archetypal features of functional programming are becoming
more and more widespread. Both garbage collection and closures (or some
semblance thereof) are available in Java, Perl, Python and Ruby. (I think C#
must have GC too.) Python's latest version has adopted lexical scoping. Java
will probably be retrofitted with parametric polymorphism soon. Programs in
these languages still don't look like FP programs, but you can hardly say
there has been no effect, and with this trend, who knows what will happen in
the next few years?
Ah, but you have to look at how the features you mention got to where
they are now. Garbage collection and closures were in Smalltalk, but it
pretty much adopted both of those from Scheme/Lisp. But it was Smalltalk that
had the greatest influence on the mainstream languages, because the mainstream
picked up on the idea of object-orientation, rather than functional programming,
which was still in its infancy at the time. Smalltalk had these features
in the early seventies - Java was only released in 1995 or so. So we have
a twenty-odd year trickle down for garbage collection. Java still doesn't
have proper closures, although if you're a masochist you can pretend that inner
classes are closures if the variable scoping happens to work in your favor for
what you want to do. The other languages that really do support closures
are all the "lightweight languages" that tend to be fairly nimble
about adopting new features. Of course, that was the point of LL1; these
languages have a chance of becoming a model for the adoption of advanced
academic features, which other languages will be forced to follow. So I
think that this trend, combined with "Internet time", may increase the
speed of propagation in language features in future; but there's no obvious
example of that having happened to a great degree yet. All of the features
you mentioned, that are actually in a major language today, have taken over
twenty years to reach those languages.
Ocaml seems to be making substantial inroads in the commercial
community. I notice one of the regular posters to the caml-list now is a
fairly well-known games programmer using Ocaml for his current project.
Scheme is being taught very successfully to high school students
in the Rice PLT program.
These are all positive things, but if you ask the question what percentage of
commercial programming is done in functional languages, then my characterization
of "virtually *zero*" is accurate.
While I'm rambling, I'll add one more thing: languages like
Perl have something to add to the mix here, in terms of languages that
work the way non-mathematical humans expect.
Not that old saw again. The principle of least astonishment?
Hasn't every language user community cornered the market on this one by
now?
Actually, I didn't mean least astonishment, although that's how I said
it. What I mean is this: most humans do not think mathematically. At
all. In fact, mathematics is an incredibly foreign discipline, to most
humans. Those of us who can think mathematically have effectively trained
our messy neural networks like a topiary bush - all neatly arranged, trimmed,
and coaxed into particular shapes. But most people simply have a tangle of
brambles, and that tangle seems to be more comfortable dealing with languages
like Perl than languages like ML. Not because of the principle of least
surprise, but perhaps because of the principle of least exposed
mathematics. It reminds me of Stephen Hawking's publisher's advice about how
the inclusion of formulae in his book would drive down sales. ML doesn't
"sell" to the general programming public because it has "too many
formulae": as simple as that. I predict this will never
change.
Before you react to the above claim, read this: we're talking about two
vastly different communities here. On the one hand, the subject of
professional engineers has been raised. Professional engineers should
certainly be trained, know BNF, and a boatload of other stuff that most software
engineers today do not know. Many of these future engineers may very well
use and enjoy functional languages. I'm sure that'll start happening as
soon as universities figure out what they should be teaching such engineers;
right now, it's a hodge-podge, and whether software engineering even exists at
the moment, as an academic discipline taught to students, is debatable.
But still, trained engineers is a worthy goal, and ours is a complex discipline,
so it's going to take some time, and we should expect that.
Then there's the other community: the self-trained or minimally trained
community of people who just want to get a job done, or just enjoy playing
around with software, and who may not even be aware of the vast storehouse of
academic information out there (perhaps they were put off by academics when
someone called them a "bozo" ;) This community will always
exist, and will always be bigger than the community of trained engineers.
The problem is this: today, the two communities I've described overlap to
such a degree that they are barely distinguishable, and there are many more of
the untrained type than otherwise. Examining academic credentials isn't
good enough: I know people with bachelor's and master's degrees in computer
science who, while certainly better than average, and good programmers, don't
live up to the standards which I think "software engineers"
should. This isn't their fault, but rather a function of the fact that as
I mentioned, software engineering as a discipline is hardly being taught,
yet.
So until a vast corps of competent and well-trained software engineers arises
(and I'm not holding my breath), what those of us out here in the trenches have
to work with is people who do not think mathematically, for whom social and
economic considerations are far more important when choosing a language than
technical considerations, etc.
It is these people that I'm mainly thinking of (although not only them), when
I say that there's a human side of programming languages which is currently
under-addressed, that I think languages like Perl (and I'm just picking Perl as
a canonical example) have perhaps begun to address, in a perhaps limited and
precursory way. Languages that make it easy to deal with basic data types
like text, for example, interspersing and embedding it in a program, and
building it into the language's syntax. Some functional languages have a
list comprehension feature; Perl has a limited sort of text comprehension.
I'm not being very concrete, partly because I'm trying to identify something
that is currently missing, and I don't know its exact shape. But a good
start would be better support for complex datatypes like text, better support
for integration of multiple languages - including natural language - in the same
"source" file, better support for polymorphism of operators on types,
etc. etc. You can find some of these features scattered around in various
languages, but they don't exist in any single accessible language today.
BTW, since I'm on the subject, I think that the most significant
difference between Perl, Python and Ruby is the syntax. (I
think I would like to retract the following two sentences.) Do you want
proof? The Parrot project is planning to use the same bytecode machine to
run all of them. The reason there is so much mobility between the user
communities of these languages is that the underlying semantics, divorced
from the syntax, is basically the same.
I think what you're describing is what will make us both happy: mainstream
languages will get increasingly sophisticated cores with a great deal in common:
runtimes like JVM, .NET, Parrot etc. These will become more
functionally-oriented over time (because, after all, lambda is the
ultimate!) On top of these cores, a variety of languages will continue to
bloom and cross-pollinate, some of them oriented towards the inexperienced, and
others more rigorous. There can still be semantic differences between
these languages: you only have to look at all the languages implemented on top
of the JVM for proof of this. However, they are more likely to be able to
interoperate with other, semantically different languages, if they share a
common, high-level core, as opposed to just being compiled to the same machine
code.
|
|
Frank Atanassow - Re: The History of T
12/16/2001; 12:52:46 PM (reads: 5537, responses: 0)
|
|
|
Frank Atanassow - Smalltalk, learnability, two communities
12/16/2001; 3:06:49 PM (reads: 5542, responses: 0)
|
|
But it was Smalltalk that had the greatest influence on the
mainstream languages,
Eh? Despite the fact that both C and C++ have neither garbage collection nor
closures, Stroustrup claims Simula was his greatest influence, and Java was
principally designed by a well-known LISPer?
Smalltalk has had a big influence on object-oriented languages in research
circles, I will surely grant you, but I don't see much of an effect in
industry.
because the mainstream picked up on the idea of object-orientation, rather than
functional programming, which was still in its infancy at the time.
The mainstream picked up on OO. No argument.
But I think FP was more mature than OO was. LISP was born in 1959. (In fact, as a topic of formal research, OO is still far less mature.)
Those of us who can think mathematically have effectively trained our messy
neural networks like a topiary bush - all neatly arranged, trimmed, and coaxed
into particular shapes. But most people simply have a tangle of brambles, and
that tangle seems to be more comfortable dealing with languages like Perl than
languages like ML.
I still don't buy this argument, and I will give you not one, but six reasons for my skepticism.
First, we are not born thinking in Perl any more than we are born thinking in
ML. I really do not buy this argument that Perl is somehow magically better
suited to non-mathematicians than ML is. I think the converse is true
(mathematically-trained people will have an easier time with ML), but the
average person probably has at least as much difficulty grasping assignment,
imperative variables and for-loops as he has grasping lets and recursion. They
are both initially alien, and this is why your argument re: Stephen Hawking
does not hold water. (I teach Java, remember? I haven't yet taught FP to a
group of non-programmers so I can't compare, but I know that imperative
features and iteration are a real sticking point for my Java students, none of
whom are CS majors, BTW.)
Second, maybe you mean that the oft-touted similarity of Perl syntax to
"natural language" syntax, and not imperative programming in general, makes it
easier for people to learn it. My responses are: a) I have yet to see any
objective evidence of this from, say, a linguist who actually conducted a
scientific study on the matter. All I ever hear are vague, unconvincing claims about context-sensitivity. b) A virtue of natural language is that it is ambiguous. But programmers, even beginning programmers, probably do not want their programs to interpret their instructions ambiguously. It seems this would
only make it harder to produce working programs.
Third, comparing ML to Perl is like comparing a grand-size T-bone steak to a
quarter pounder at McDonald's. Of course it will take you longer to eat the
first: there is more to eat! ML is simply more sophisticated than Perl. There
is more information in an ML program than there is in an equivalent Perl
program. There are more structures to learn, because there is more power to be
had. There are more concepts to learn, because more is possible. If you
restricted an ML-like language to some subset which was roughly equivalent to
Perl, and you added to it a library of functions which equate with what Perl
has built-in, I am fairly confident that it would be easier to teach and use
than Perl.
Fourth, you present mathematics as though it were some forbidden mystery which
is only whispered in low tones at secret meetings of the illuminati. Of course
that is far from the truth. Everybody who graduates high school has learned
arithmetic, algebra, some functional analysis and probably some differential and
integral calculus. In college, they may learn more. The ideas which ML is based
on are very closely related to the ones they learn in algebra. Even the syntax
is in many cases similar (modulo the limitations of ASCII). Perl, I think, is
the more alien for non-programmers.
For experienced programmers, OTOH, I agree Perl is far more familiar.
Fifth, the success of the TeachScheme!
Project (which I incorrectly referred to as the "Rice/PLT Scheme teaching
project" more than once above) suggests that beginners can learn FP,
and from what I've seen, they get more out of it than they do being taught C++
or Pascal. (And they have the standardized test scores to prove it.) That
doesn't say anything against Perl but, thankfully, Perl is not being taught in
American schools yet. :)
Sixth, just on the merits of the language, FP must be easier to teach because
you can formalize exactly what the reduction rules are, show how the program
changes during evaluation, etc. Perl, OTOH, is an implementation-dependent
mess, full of poorly specified, ad hoc, irregular behavior. I just can't
believe that Perl would be the more easily grasped.
Before you react to the above claim, read this: we're talking about two vastly
different communities here. On the one hand, the subject of professional
engineers has been raised. [..] Then there's the other community: the
self-trained or minimally trained community of people who just want to get a
job done, or just enjoy playing around with software, and who may not even be
aware of the vast storehouse of academic information out there (perhaps they
were put off by academics when someone called them a "bozo" ;)
OK, OK, I retract "bozo".
To all the bozos who were offended by my remark: I apologize. Anton is right:
you aren't bozos, you're hobbyists. ;)
Seriously though, your model appeals to me.
The problem is this: today, the two communities I've described overlap to such
a degree that they are barely distinguishable, and there are many more of the
untrained type than otherwise.
Agreed, I think.
It is these people that I'm mainly thinking of (although not only them), when I
say that there's a human side of programming languages which is currently
under-addressed, that I think languages like Perl (and I'm just picking Perl as
a canonical example) have perhaps begun to address, in a perhaps limited and
precursory way.
Hm, that is an interesting perspective, which evokes some sympathy in
me. However, I am not sure I understand the consequences of this.
So are you saying it is an apples-and-oranges thing, and I am picking fights
with the wrong people? Or are you saying that academics should dumb down their
research to make it more accessible? Or are you saying perhaps that for each
"abstract" programming language, there should be one elegant, streamlined,
general version for the engineers, and one redundant, sugarified, specialized
version for the hobbyists ("programmers?" what is a good name?)---sort of a
domain-specific language, if you will.
BTW, as a postscript, let me add that the reason I take liberties with my remarks about "bozos" on this weblog, is that I feel confident that there are none here who fit that description. (No, not even you, Fredrik.)
|
|
Anton van Straaten - Re: The History of T
12/16/2001; 11:04:30 PM (reads: 5524, responses: 0)
|
|
Fourth, you present mathematics as though it were some forbidden mystery which is only whispered in low tones at secret meetings of the illuminati.
Aha - therein lies the rub! Perhaps you've come across references to "Matlab de Illuminati", a Latin phrase referring to the copy of Matlab used by the Illuminati, to model their secret world-controlling activities? You may be surprised to find that "Matlab de Illuminati" is an exact anagram for "Lambda Ultimate in IL", a clear reference to Ehud's weblog, right down to the country code of his email address!!! Faced with such evidence, do you still foolishly cling to the belief that mere everyday bozos will ever be allowed to learn real mathematics? Why do you think most people forget anything but the most basic algebra when they leave high school? It's the mind rays, of course, beamed from orbiting platforms sent up by the CIA, which, of course, is under Illuminati control!
My more serious response will have to fnord wait until "tomorrow", defined as a function of some timezone in which it isn't already tomorrow right now...
|
|
Chris Rathman - Re: The History of T
12/17/2001; 7:54:57 AM (reads: 5501, responses: 1)
|
|
Stroustrup claims Simula was his greatest influence, and Java was principally designed by a well-known LISPer? Smalltalk has had a big influence on object-oriented languages in research circles, I will surely grant you, but I don't see much of an effect in industry.
Stroustrup seems to have always gone out of his way to downplay Smalltalk as an influence. Be that as it may, Smalltalk has had a tremendous influence on the industry, as did all those skunkworks projects at Xerox PARC. The problem was that Apple & Microsoft decided to ignore the programming language and use C, C++ & Objective-C to build their GUIs. Out of these, only Objective-C sticks with the ST messaging model.
I do think it fairly safe to say that Smalltalk was very influential in the development of user interfaces, but its effect on programming language design was not nearly as dramatic.
|
|
Ehud Lamm - Re: The History of T
12/17/2001; 9:43:25 AM (reads: 5546, responses: 0)
|
|
I do think it fairly safe to say that Smalltalk was very influential in the development of user interfaces, but its effect on programming language design was not nearly as dramatic.
This is kind of fair given the interests of the Smalltalk developers (esp. Kay).
|
|
Fredrik Lundh - Re: The History of T
12/17/2001; 11:20:50 AM (reads: 5506, responses: 1)
|
|
To jump straight to something so specific and narrow suggests a grievous misperception of what 50% or more of the world really needs and wants, e.g., food.
Wow.
Frank, how about a simple experiment?
For the next few days, assume that every single person you talk to is at least as smart and rational as you are. Don't limit yourself to this site; apply this to everyone you meet.
If they say something you don't agree with, try to figure out what they know that you don't. If they say something that sounds ludicrous, try to figure out if there's another way to interpret what they said that makes more sense. If they appear to act irrationally, try to figure out what piece of information they have access to that you don't.
(and with "figure out", I mean "think, don't ask")
When you've tried this technique on other bozos for a few days, I'm pretty sure you can figure out what I was talking about all by yourself.
|
|
Ehud Lamm - Re: The History of T
12/17/2001; 12:30:23 PM (reads: 5541, responses: 0)
|
|
This is a professional forum. If any LtU reader wants to insult others, please use other forums or personal email.
I spent a lot of time trying to run this site for the benefit of all of us. Please don't ruin it.
|
|
Frank Atanassow - Re: The History of T
12/17/2001; 1:25:16 PM (reads: 5477, responses: 1)
|
|
My dear Fredrik,
I have gotten a lot more out of Anton's posts than yours. About all I get from you is that you feel the same contempt for me that you claim I hold for you.
Ironic, eh?
|
|
Ehud Lamm - Re: The History of T
12/17/2001; 1:30:00 PM (reads: 5517, responses: 0)
|
|
Can we put this to rest?
|
|
Frank Atanassow - Re: The History of T
12/17/2001; 3:17:03 PM (reads: 5462, responses: 0)
|
|
OK, I consent to end the thing with Fredrik, but I would still like to hear Anton's (and others') response to my last post.
Let me just add the standard disclaimer, then.
It seems that I have ended up dominating this discussion, and, unfortunately because of this whole academia vs. hackers thing, it may appear that I am somehow representing the opinions of PL researchers as a whole. This is certainly false, not least because I am still a novice at research work myself. Ehud is here, and I know there are some other people lurking who evidently possess more grace than I.
In short, I am definitely a hardliner, and my people skills are lacking. I apologize for ridiculing the opinions of other readers, and invite you to give me another chance in other discussions.
The End/Ende/Eind/Fini/Kan
(Still, nothing like a good flamewar to get your blood flowing though, eh? ;)
|
|
Anton van Straaten - Re: The History of T
12/17/2001; 9:12:56 PM (reads: 5474, responses: 1)
|
|
OK, this is going to be long - brevity would take more time. I hope the weblog can take
it...
Smalltalk has had a big influence on object-oriented
languages in research circles, I will surely grant you, but I don't see much
of an effect in industry.
because the mainstream picked up on the idea of
object-orientation, rather than functional programming, which was still
in its infancy at the time.
The mainstream picked up on OO. No argument.
Smalltalk had a big influence on many niche commercial
languages prior to Java's debut. C++ is a sort of system-level special
case, I think. But anyway, let's just agree on the OO part. The
original point I was trying to make is that (a) the presence of garbage collection doesn't really represent a "functional" feature that's been transferred
into the mainstream, regardless of where garbage collection originated; and (b) even closures,
which are more purely functional, were mediated through things like Smalltalk's code blocks,
and were not typically seen as a fundamental language feature, but rather
as a possibly useful adjunct. A direct mainstream adoption of
functional language features hasn't yet happened - the only example I'm aware of,
continuations in Stackless Python, has yet to be adopted in Python
proper.
Another factor impeding this cross-fertilization is
simply that full appreciation of functional concepts can require quite a leap
for imperative programmers. (Perhaps as much of a leap as the concept of
adding sugar to a programming language requires for a die-hard academic!
;) One can look at functional features out of context and come to the
conclusion that they're interesting but not particularly useful or important, if
one is thinking imperatively.
But I think FP was more mature than OO was. LISP was born in
1959. (In fact, as a topic of formal research, OO is still far less mature.)
Well, in a sense FP as a pure academic discipline existed at least from the
time that Church invented the lambda calculus. But Lisp didn't make a very
good functional language in the early days - there were too many unresolved
problems, and the lack of lexical scoping didn't help. It wasn't until
around the time that Scheme appeared in the mid-70s that the functional aspects
began being fully explored, afaik. The statically-typed and
type-inferencing languages like ML and Haskell didn't show up until the
'80s. Contrast this to the earliest Smalltalk version from 1969. So
really, FP as a programming discipline was embryonic at best, at the time that
object orientation began spreading.
I still don't buy this argument, and I will give you not one,
but six reasons for my skepticism.
I should probably stop using metaphors and explain myself better. I'm
not trying to make any grand claims for Perl. I don't think we're born
thinking in it, or that it resembles natural language better. All I'm
talking about are a collection of little practical things that it offers, which
allow people to do the things that they need to do with it. (Examples will
follow.) Now, in theory, you can provide some of these features in
libraries (but not all). Still, that doesn't rise to the level of
convenience that it can when an operation is syntactically embedded in the
language.
OK, example #1. This may not be the best example of what I'm talking
about, but it relates to another point you raised about average people grasping
"let". I agree that let as a concept is not a problem, but I'd
like to examine the practice of it. In Scheme, for example, you have let,
let*, and letrec, all of which have very specific purposes. The
static-typed functional languages have typically inherited these in some form,
because the semantics are pretty basic to functional style and lambda
calculus.
But what does this mean to an average programmer? I've seen functional
advocates complaining about C++ allowing variables to be defined anywhere in a
block, for example. But why shouldn't you be allowed to do that?
What does it matter, to the programmer, if the compiler writer has to jump
through hoops to convert everything to an essentially functional SSA form before
processing it further? You and I might like the idea of being explicit
about our scope blocks - I like to do a sort of manual static analysis and
refactoring based on that. But the "benefit" here is
undetectable to the average programmer, and a language like Scheme - one of my
favorites! - which requires you to nest scope blocks and choose between three
different types of local variable definition is fighting a losing battle in
terms of popular acceptance (PLT's efforts notwithstanding). Now, I'm sure
you can point to features in any language that may seem pointless to the user;
but in the user-oriented languages, those features are typically there because
someone wanted them, not because they're dictated by an obscure branch of
mathematics that has a really scary name (i.e. anything with both a Greek letter
and the word "calculus" in it).
Example #2+: the ability to easily interpolate values
within strings, to embed blocks of text within a program (like HTML or SQL),
to open, loop through and read a file with hardly any code, etc. - all
ordinary, boring things that people do every day, and for which shortcuts are helpful. These
shortcuts are often syntactical, so can't just be emulated with libraries.
For more and better examples, I would have to start scouring through real code.
Fourth, you present mathematics as though it were some forbidden
mystery which is only whispered in low tones at secret meetings of the
illuminati. Of course that is far from the truth. Everybody who graduates
high school has learned arithmetic, algebra, some functional analysis and
probably some differential and integral calculus.
I'm afraid we must be living on different planets, and somehow communicating
through a warp in spacetime (no doubt created when the first fully-fledged
quantum computer goes online in 2012...) On the three continents I've
lived on (granted, one of them was Africa), most high school graduates don't
absorb very much beyond the most basic algebra. Ask one of them whether
they learned calculus, and you're likely to get an answer like "I think we
might have..."
You dismissed my Stephen Hawking analogy, but considering that the market for
Hawking's book was high school graduate and above, why do you think the presence
of formulae was considered taboo?
(I teach Java, remember? I haven't yet taught FP to a group of
non-programmers so I can't compare, but I know that imperative features and
iteration are a real sticking point for my Java students, none of whom are
CS majors, BTW.)
Java isn't as lightweight as some of these
languages. I'm using "lightweight" in the same sense as the LL1
conference, I believe: basically, light semantic weight, which makes them easy
to learn. Features like static typing and the need to package everything
as a class (and each class in a separate file) create barriers to instant
gratification, and to developing by casual accretion. There's a reason
that Perl, Python, Ruby, Tcl, and Javascript (1.x) are all dynamically typed:
it's a big part of what makes them lightweight. People don't want to have to think
things out in advance and figure out explicitly what type of
object they're dealing with, they just want to get something done
(type inferencing might help, but probably not enough, at least not in current systems). While
programming this way may make you shudder, it's what many people
really do. Certainly, no-one's successfully developing huge and complex systems this way, but
you might be surprised to see what they are developing. A good
example might be Slash, the Slashdot board software - it's not a completely trivial application,
and it was all put together with Perl, and by all accounts, not
very prettily, either. In the business world, many applications have limited scope, so
it doesn't matter if what's developed doesn't solve any
other problems or work the same way as a similar program in another department,
or whatever. (Of course, it can get to the point where it does matter,
which is when the consultants with MBAs and PhDs get called in, and the
money flows like water...)
Fifth, the success of the TeachScheme!
Project (which I incorrectly referred to as the "Rice/PLT Scheme
teaching project" more than once above) suggests that beginners
can learn FP, and from
what I've seen, they get more out of it than they do being taught C++ or
Pascal. (And they have the standardized test scores to prove it.)
Scheme is perhaps the most "lightweight" of the functional
languages, so this doesn't run completely counter to what I'm saying, although
Scheme does lack much of the sugar I'm talking about. We'll have to wait
and see how these students apply their lessons. Perhaps this will lead to
an upsurge in Scheme usage, but if it does, I'm betting we'll then see versions
of Scheme that are loaded with sugar. Not surprisingly, there's actually a
version of Scheme called Sugar.
Besides, part of what this discussion is about is what people are actually
using, as opposed to being taught. People are using Perl and its ilk
despite a near-total lack of support from the educational system, and minimal
support in the business community. That's pretty interesting in
itself.
That doesn't say anything against Perl but, thankfully, Perl is
not being taught in American schools yet. :)
Amen to that! I'm proposing that Perl's success can be learned from,
rather than that we should all adopt it as the programming language of the
millennium.
Sixth, just on the merits of the language, FP must be easier to
teach because you can formalize exactly what the reduction rules are, show
how the program changes during evaluation, etc. Perl, OTOH, is an
implementation-dependent mess, full of poorly specified, ad hoc, irregular
behavior. I just can't believe that Perl would be the more easily grasped.
To the non-mathematically-inclined, those reduction rules
might seem as arbitrary as some of the features of Perl. Besides, people
aren't really taught Perl, they absorb it over time. They ask friends,
colleagues and online sources for tips, or they cut and paste other
people's code, even when they don't fully understand it. They shoot
themselves in the foot, bandage it up, and then add another leg to make up for
it. In fact, my experience has been that for the average programmer, far
more real learning of a language occurs this way, than in any classroom.
Mathematics can give us leverage - for example, the ability to concisely specify and explain a language,
and understand its characteristics. But trust me, most of the people I
work with get no such leverage from mathematics. The McDonald's quarter-pounder you mentioned
is exactly what they want - they can grab it with both
hands and take a bite out of it. They don't need a plate or a steak knife,
and they don't need to slice off manageable pieces first - they just shove that sucker
into their mouth as far as it'll go, and chow down! (Uh-oh, back to the
metaphors...)
Perhaps I'm not giving "them" enough credit,
but this really isn't meant as a "We're better than...", even though
it probably comes across that way. The people I'm talking about aren't
staying up at night to write long messages about computer languages on weblogs,
they're just trying to get a job done in the most expedient way possible for
them. The way they do
it may involve redundancy, and it certainly doesn't involve solving an entire
class of problems by solving a meta-problem, but they don't know that, and
would only care if you could give it to them in a
box. If you suggest to them that they could learn more about programming from one of the
excellent pedagogic books on Scheme, they look at you funny and enroll for a course in
C++ at the local community college instead, even if their day job is writing Visual
BASIC.
So are you saying it is an apples-and-oranges thing, and I am
picking fights with the wrong people?
Perhaps. I've kind of lost track of the final goal
of this discussion, but it's been interesting. I'll try to clarify.
Or are you saying that academics should dumb down their research
to make it more accessible?
Definitely not. I think to some extent, the
availability and accessibility of so much good information on the Internet will result in faster trickle-down
than previously. The smart hackers will pick up on this stuff, and
as you say, people will use languages like OCaml because they
like it and they can. That will spread, and we'll eventually find out what a
"popular" functional language really looks like.
Or are you saying perhaps that for each
"abstract" programming language, there should be one elegant,
streamlined, general version for the engineers, and one redundant,
sugarified, specialized version for the hobbyists ("programmers?"
what is a good name?)---sort of a domain-specific language, if you will.
Yes, something like that. I think something like this is happening, and I described the way I think it may evolve, e.g. on top of sophisticated virtual machines or intermediate languages. You
might even find that once the sugary human factors are more evolved and streamlined, that you start finding some of them useful yourself. I'm not talking about anything that couldn't be done with a sufficiently powerful preprocessing mechanism and some good
libraries.
Speaking of preprocessing, the Scheme/Lisp macro capabilities are arguably a sort of academic version of catering to human factors - after all, anything you can do with them, you can write in pure Scheme too. It's quite possible that this is the sort of thing that will help bridge the divide between the camps.
To try to
summarize:
1. It's misleading to extrapolate one's own
experience as a smart formal practitioner and assume that people in the
"real world" can benefit directly from the things that give you
leverage. I'm not saying it can't happen - certainly, if you could get
people to go to college and take the "right" courses, it would help - but that's not happening right now.
In fact, do you think your teaching of Java really helps? No
criticism intended, but my point is that learning Java doesn't teach
people how to solve meta-problems, that's for sure, unless you teach them to generate Java code. Still, none of this is intended to suggest that
you shouldn't try to teach or otherwise propagate the One True
Way.
2. The languages that have become popular in the
"real world" have something to offer in terms of insight into how
people interact with computer languages, when not rigorously trained. It may
be necessary to do some digging to separate the important bits from the fads
and legacy cruft that everyone just gets used to and accepts, but the sorts
of things I'm talking about are mostly small things, as I said - shortcuts to
help do things that people often want to be able to do. Examined
individually, these features may not seem compelling - it's only when taken in toto that
they amount to anything significant. The functional languages are
mostly lacking in such features, but that's because they're generally targeted at a different
application space, that traditionally has been more mathematical in the first
place.
3. Yes, syntax is the last refuge of linguistic
luddites!
;)
|
|
pixel - Re: The History of T
12/18/2001; 3:40:38 AM (reads: 5464, responses: 1)
|
|
Example #2+: the ability to easily interpolate values within
strings, to embed blocks of text within a program (like HTML or SQL), to open,
loop through and read a file with hardly any code, etc. - all ordinary, boring
things that people do every day, and for which shortcuts are helpful.
Truly agreed. I wonder how long it will take for every language to have Ruby's:
File.open("/usr/local/widgets/data").each{|line|
  puts line if line =~ /blue/
}
Something like this only needs anonymous functions (or better, closures) and some
OO/mixins, so most languages could provide it. Why don't they?!
Or are you saying that academics should dumb down their research to make it more
accessible?
Definitely not. I think to some extent, the availability and accessibility of so much
good information on the Internet will result in faster trickle-down than previously. The
smart hackers will pick up on this stuff, and as you say, people will use languages like
OCaml because they like it and they can. That will spread, and we'll eventually find out
what a "popular" functional language really looks like.
As for me, it's not "so much good information". I've been looking for
readable/understandable documentation on various things. It's not easy at all:
most papers I find are stuffed with proofs too complicated for me, and
not interesting since I don't care about them. And examples are really sparse.
|
|
pixel - Re: The History of T
12/18/2001; 3:49:31 AM (reads: 5443, responses: 1)
|
|
Something like this only needs anonymous functions (or better, closures) and some
OO/mixins, so most languages could provide it. Why don't they?!
uh, that's not so simple after all...
- C++/Java are discarded because they don't have anonymous functions (well, Java has
inner classes, but that's ugly).
- Scheme is discarded because it doesn't have mixins for overloading the "each" (?)
- Python used to be discarded because its "lambda" is too poor. Python 2.2
now has iterators.
So who's left?
- Haskell
- OCaml, but its default library doesn't use the classes which would give it this expressivity.
- Perl, Ruby, Smalltalk
- ???
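For reference, here is roughly what the Ruby snippet above looks like in Haskell, one of the candidates left standing (just a sketch, matching lines that contain "blue" via lazy readFile):
import Data.List (isInfixOf)

main :: IO ()
main = do
  contents <- readFile "/usr/local/widgets/data"
  mapM_ putStrLn (filter ("blue" `isInfixOf`) (lines contents))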
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 4:01:47 AM (reads: 5492, responses: 0)
|
|
There's always a delay between truly innovative research and widespread understanding. LtU tries to narrow the gap by linking to readable, interesting research papers, to be read by both academics and skilled professionals, who can then carry the word.
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 4:06:18 AM (reads: 5499, responses: 0)
|
|
My point of view is that we are lacking good (formal and empirical) research into issues of syntax and programming-language learning. This is part of the reason debates on these issues tend to become religious wars.
Alas to research these issues, you not only need pl theory but also tools from cognitive science and linguistics.
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 6:50:30 AM (reads: 5433, responses: 1)
|
|
I agree that solid research into computer language usability seems to be lacking - although I must admit, I haven't looked for any! I wonder...? One problem would be that it would require studies of actual users, and long-term studies at that, since how someone reacts to a language in the lab may not tell you much about what they do in the real world.
The approach today is that individuals or groups try to design a language according to the criteria they consider important, and evolutionary/market forces "grade" the result. I don't expect that to change much anytime soon, but I think language designers who care about the propagation of their languages into broader communities, could do a better job of catering to what their "customers" actually want. Of course, most academic language designers have a different goal, which is roughly to come up with the most powerful and expressive mathematical formalism possible.
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 7:04:31 AM (reads: 5438, responses: 0)
|
|
As for me, it's not "so much good information". I've been looking for readable/understandable documentation on various things. It's not easy at all: most papers I find are stuffed with proofs too complicated for me, and not interesting since I don't care about them. And examples are really sparse.
The academic papers can be difficult to use, and a big reason for that is unavoidable: they tend to build on a common knowledge base shared amongst the academic community. If you're interested in programming languages, that's one of the best reasons to learn languages like Scheme and ML, and formalisms like the lambda calculus (which most programmers can understand without too much difficulty, honest). These go a long way to helping to understand many academic papers in the field.
An area which I get the sense may be lacking in good, detailed info is the middle ground between explanations for newbies and dissertations for experts. There are often good (paper) books on such topics, though.
Still, there's good stuff out there, it just isn't always easy to find. You can't necessarily type a phrase into Google and get what you want: you may need to hunt it down through references, ask around, etc. You also have to ask people involved in the area you're interested in - you could ask me something about programming languages and I'd have a good chance of being able to point you to something useful that I already know about, but ask me about virtual memory systems and I'd have to search for it myself.
Throw out some topics, who knows, someone here might be able to help...
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 7:12:43 AM (reads: 5421, responses: 1)
|
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 7:36:27 AM (reads: 5471, responses: 0)
|
|
It's indeed a great list of resources.
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 7:53:42 AM (reads: 5477, responses: 0)
|
|
The evolutionary approach isn't the same as research, of course. Notice how many think that there's no evolution... Many new languages simply sell the same snake oil in new bottles.
The problem is that language usability studies are not exactly CS. Maybe that's part of the reason university faculty hardly ever work on these issues. Most research attempts come from big industry and government. Like all research, most of this work is useless, but there have been some advances. (Well, I hope so. I can't think of any specific example at the moment.)
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 8:01:11 AM (reads: 5462, responses: 0)
|
|
Is the problem with Scheme here really mixins?
Seems to me you'd basically return the appropriate each function (i.e., closure), or am I totally off base?
But isn't the natural Scheme approach call-with-input-from-* ?
|
|
pixel - Re: The History of T
12/18/2001; 9:51:57 AM (reads: 5419, responses: 1)
|
|
Throw out some topics, who knows, someone here might be able to help
Here I go :)
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 11:33:07 AM (reads: 5405, responses: 0)
|
|
imperative language with type & inoutness inference. eg:
If you're asking questions like that, I don't think you can expect to have the answer handed to you on a platter. I'm not enough of an expert on this subject to say anything very intelligent about it off the top of my head, but one paper that might be useful here is ML and the Address Operator, which describes how to provide a more transparent variable mutation feature in ML. It might even address the question you're asking directly, I'm not sure. This paper contains some hairy-looking typing rules, though. I don't know how familiar you might be with type judgement notation - a good intro can be found in Polymorphic Type Inference.
designing library using generics, aka answering questions like:
- are "sets" useful in practice?
Um, yes? ;) I'm not sure what you're thinking of.
- should one be able to write "set_a < set_b" for testing inclusion (means partial order),
- whereas "list_a < list_b" is usually defined using an order on the elements
I guess I would look at how existing libraries for languages that provide generics have handled this sort of thing - Dylan maybe, or even STL in C++.
Actually, I think your question relates to one of the arguments sometimes used against generics - that overloading an operator or function with slightly different meanings for different types of parameters can create confusion, and be more trouble than it's worth.
I'll pontificate a bit, but I'm by no means an expert. It seems to me that function overloading can serve at least two different purposes: one is a really useful one that allows generic programming, where you can write generic code that says "a < b" and know that no matter what a & b happen to be, as long as "<" is defined for them, a meaningful operation will occur and the result will be what your program expects to receive.
There's another sense, though, which is one of syntactic convenience - the sort of thing that probably makes Frank grind his teeth. ;) In this sense, it might be convenient to have "<" work in different ways in different contexts, and that's exactly the sort of thing that Perl does. I'm betting you won't find much academic research on that subject, though, because it's totally unmathematical - why would you call two different operators by the same name?
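For what it's worth, here is how one typed functional library answers the specific set/list question: inclusion gets its own name instead of overloading "<". A rough Haskell sketch using the standard Data.Set (the set values are just toy examples):
import qualified Data.Set as Set

main :: IO ()
main = do
  let a = Set.fromList [1, 2]    :: Set.Set Int
      b = Set.fromList [1, 2, 3] :: Set.Set Int
  print (a `Set.isSubsetOf` b)            -- inclusion, spelled out by name: True
  print ([1, 2] < ([1, 2, 3] :: [Int]))   -- (<) on lists is lexicographic: True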
You might get better answers to questions like these on comp.lang.functional or something like that, if only because of the larger audience, but I'm guessing you may already have tried that...
|
|
Ehud Lamm - Re: The History of T
12/18/2001; 1:11:59 PM (reads: 5464, responses: 0)
|
|
I am not sure I understand your questions re libraries. Are you looking for theoretical answers to the question of whether sets are useful in practice? (This all started by talking about research.)
If you are asking from a practical SE perspective, I think one should really study the STL. I think that, all things considered, it is a great success. The generic approach wins big.
The notion of "set" seems to me to be a very useful abstraction. Indirect proof? See how many data structures were developed that implement the Set ADT (see Rivest et al., for example).
I can say more on this topic (I do teach SE...) but I am not sure how this relates to PL theory, so perhaps I didn't understand your question.
|
|
pixel - Re: The History of T
12/18/2001; 1:51:03 PM (reads: 5401, responses: 1)
|
|
ML and the Address Operator seems interesting, thanks.
Dylan maybe, or even STL in C++
IMO STL is quite poor. I've just stumbled on the fact that "+" is not defined
on vectors, nor does it exist under any name.
I also had some experiences with "string"s (eg: try to do the equivalent of
"strncmp(a, b, strlen(b))" aka "$a =~ /^\Q$b/")
I'll try Dylan...
The notion of "set" seems to me to be a very useful abstraction. [...]
I can say more on this topic (I do teach SE...) but I am not sure how this relates to PL theory, so
perhaps I didn't understand your question.
Agreed, this is more practical than theoretical (unless you go and see whether
modulo(-3, 2) should be 1 or -1, or whether gcd(0,0) should be 0 (cf. the thread
on the Haskell mailing list))
As for "set"s being useful, I wonder why haskell doesn't have one, and
plugging one needs a new whole set of sugar (filter, map, foldl... being
already in use)
|
|
Frank Atanassow - Re: The History of T
12/18/2001; 2:30:29 PM (reads: 5417, responses: 0)
|
|
OK, this is going to be long - brevity would take more time.
Who was it that said, "I should have written you a shorter letter if I had had
more time"? Well, the same holds for me.
I've kind of lost track of the final goal of this discussion, but it's been
interesting.
Indeed, I'm getting fuzzy on that too. Let's try to pull it back to where it
started. You said FP had had no effect on commercial development. I countered
by saying that GC and closures originated with FP languages.
the presence of garbage collection doesn't really represent a functional
feature that's been transferred into the mainstream, regardless of where
garbage collection originated
Indeed, it is not a functional feature, but it did originate with LISP. You say
that Smalltalk also had it very early on, and that's fine. You say that OO
languages have had a bigger effect on commercial development, and I cannot
argue with that. However, that does not prove that the presence of GC in
certain OO languages is what influenced the current presence of GC in OO
languages like Java, Perl, Python and Ruby (OK, maybe Perl is not OO, but who
cares).
I really think that GC is a much stronger thread in the evolution of FP
languages than OO ones, partly because there are practically no FP ones without
it, but plenty of OO ones, and partly because you cannot implement higher-order
(non-linear) functions without it. But if you really want to say that it is OO
which popularized it, then fine; in the popularity game, FP always loses
anyway.
Next you are going to tell me that higher-order functions were popularized by
OO!
even closures, which are more purely functional, were mediated through things
like Smalltalk's code blocks, and were not typically seen as a fundamental
language feature, but rather as a possibly useful adjunct
See, what did I tell you? :)
For me, higher-order functions are by definition an FP feature. If you can
define the functions curry and uncurry in Smalltalk (I don't know if you can; I
am not intimately familiar with ST, and even less with its block feature,
though I've seen it), then as far as I am concerned it is an FP language.
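(For reference, curry and uncurry are one-liners in Haskell; these are essentially the Prelude definitions:)

curry :: ((a, b) -> c) -> a -> b -> c
curry f x y = f (x, y)

uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f (x, y) = f x y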
Oh sure, what popularity it has it has because it's OO, and, if I were to
concede that closures in Java, etc. exist there because of ST and not FP
languages (which I don't, because I think many more programmers learned of
closures from LISP or Scheme than from ST or the `niche' languages someone
mentioned), then I would conclude that the popularity of OO is responsible for
the emerging popularity of closures. Let's say this is true.
Then we have to go waaaay back to your first post here to see if the claims
which I have provisionally granted you actually support your original
thesis. Hm, what was it? You said:
But it's only been 40-odd years since the first "high level" language was
invented, and there's plenty of stuff that still has to
trickle. Incontrovertible proof of this can be found in the fact that the
entire concept of functional programming has virtually *zero* presence in the
commercial world, some few special-case exceptions like Erlang notwithstanding.
Waitaminute! What's this?! You were arguing that high-level features haven't
trickled down to the mainstream, period, and used FP as incontrovertible proof
because it has high-level features. But now you are arguing that high-level
features like GC and closures exist in the mainstream, and that they were put
there by OO languages descended from ST! :)
Anyway, let's get off the subject of who was there first, LISP or
Smalltalk. History is quirky; if you are going to pull the popularity bit on
me, I will never deny it; and finally, who cares where GC and closures first
came from? These days I'm more interested in dependent types and polytypic
functions, and I hope you will not argue that those have already been
popularized by FORTRAN...! ;)
(BTW, that reminds me. Algol 60 (1960) had closures. In fact, Algol 60 is
famous for being the
first language to support unrestricted beta-reduction. But I promised to get
off this subject...)
All I'm talking about are a collection of little practical things that it
offers, which allow people to do the things that they need to do with
it. (Examples will follow.) Now, in theory, you can provide some of these
features in libraries (but not all). Still, that doesn't rise to the level of
convenience that it can when an operation is syntactically embedded in the
language.
OK, I am going to argue eventually that most of these syntactic conveniences
are only convenient for writing bad programs. But first I have to take
exception with your first example.
OK, example #1. This may not be the best example of what I'm talking about, but
it relates to another point you raised about average people grasping "let". I
agree that let as a concept is not a problem, but I'd like to examine the
practice of it. In Scheme, for example, you have let, let*, and letrec, all of
which have very specific purposes.
Indeed, I also think this feature of Scheme is unnecessary. In Haskell, there
is only one kind of let, and it always behaves like letrec.
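A small illustrative example (the definitions are made up): in Haskell the bindings in a let can refer to themselves and to each other, exactly as in Scheme's letrec:

firstFive :: [Int]
firstFive = let ones = 1 : ones      -- 'ones' refers to itself
            in take 5 ones           -- [1,1,1,1,1]

parity :: Int -> Bool                -- mutual recursion, for n >= 0
parity n = let isEven 0 = True
               isEven k = isOdd (k - 1)
               isOdd  0 = False
               isOdd  k = isEven (k - 1)
           in isEven n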
I've seen functional advocates complaining about C++ allowing variables to be
defined anywhere in a block, for example. But why shouldn't you be allowed to
do that? What does it matter, to the programmer, if the compiler writer has to
jump through hoops to convert everything to an essentially functional SSA form
before processing it further?
I don't see anything wrong with this feature of C++. The scope of a variable
declaration is all the statements after it up to the end of the block. I don't
see anything "unfunctional" about it at all. And indeed I think the compiler
should jump through hoops for the user.
If I'm missing something, you had better point it out to me.
Now, I'm sure you can point to features in any language that may seem pointless
to the user; but in the user-oriented languages, those features are typically
there because someone wanted them, not because they're dictated by an obscure
branch of mathematics that has a really scary name
That's a real cheap shot, Anton. Features requested by users for languages like
ML typically do not get rejected because of slavish adherence to some arbitrary
mathemagical (sic!) ideal. The designers are not so short-sighted. They get
rejected because they break properties that the language is designed to
respect, properties which make programs easy to reason about; or they get rejected
because they are redundant; or for the same reasons as they get rejected from
ad hoc languages.
A big difference between formally-based languages like ML and ad hoc languages
like Perl is that users of the former, though they may not realize it, benefit
every day from the fact that properties like principal typing or referential
transparency are respected, while users of the latter are so used to having the
semantic ground shot out from under them by features which don't respect any
simple properties, that they have been conditioned to rely only on low-level,
implementation-dependent descriptions of behavior, and consequently cannot even
see any opportunities for higher-level abstractions.
Example #2+: the ability to easily interpolate values within strings, to embed
blocks of text within a program (like HTML or SQL), to open, loop through and
read a file with hardly any code, etc. - all ordinary, boring things that
people do every day, and for which shortcuts are helpful
Let's take these one at a time. Now I am going to fulfill my promise to suggest
why these syntactic shortcuts are ultimately not very useful.
First, string interpolation. This is mostly useful when you are outputting
structured things like HTML or SQL, as you say. The problem is that when you
embed a string into a string, you have lost the original structure. A better
way, and I think the way every experienced Haskell programmer does it, is to
create a datatype which represents the abstract syntax of the HTML. Then you
can still manipulate the HTML using its natural structure, guarantee its
well-formedness, etc.
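A minimal sketch of that approach, with a made-up Html type (real Haskell HTML libraries are richer than this):

data Html = Text String
          | Elem String [(String, String)] [Html]   -- tag name, attributes, children

render :: Html -> String
render (Text s)         = s
render (Elem tag as cs) =
    "<" ++ tag ++ concatMap attr as ++ ">" ++ concatMap render cs ++ "</" ++ tag ++ ">"
  where
    attr (k, v) = " " ++ k ++ "=\"" ++ v ++ "\""

-- e.g. render (Elem "a" [("href", "index.html")] [Text "home"])
--      == "<a href=\"index.html\">home</a>"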
Second, opening, looping through and reading files. The problem with this
special syntax, again, is that it only works for text files, where you process
one character at a time, or one line at a time. But usually there is more
structure; when there is, you want to apply a parser to the input, and work on
the AST it produces.
Let me give you another example: lists! Scheme, ML and Haskell have a special
syntax for them, which is convenient, but I think may ultimately be
wrongheaded. It encourages you to force everything into a linear,
right-associative structure, when you could be using better-suited data
structures which are more efficient or better reflect the structure or both.
Here's yet another example: regular expressions. I'm sure I don't have to tell
you, but how many times have I seen people try to do context-free parsing with
regexps?! And the end result is it works on their own test cases, and then
their kludge breaks horribly on a nested expression someone else dreamt up.
The horrible truth is that these conveniences are only the right solution for
the implementor, and not the end-user. Most things in the real world are not
linear, or regular, or character-based. And in those cases where they are, you
will not suffer much from using a slightly more verbose or less convenient syntax.
Now you may say, but these people, these non-engineers who don't have PhDs
and don't know BNF, just don't know that, and 80% solutions at least let them
get work done 80% of the time. Well, my response is: let's tell them! We can
start by giving them a full set of tools including a power wrench, rather than
a hammer and a couple of screws. And then we can give them an instruction
manual for our power wrench. And if, like most people, they don't bother
reading the instructions, then we can get a graphic designer to make it
sexy. And if it's not sexy enough we can do other things.
Well, that's one idea.
The truth is, I am not so averse to adding syntax and redundancy for
beginners, but what I am mainly concerned about is that 1) adding syntax makes
the language larger and more difficult to learn anyway, and 2) the language
will be too verbose for them when they grow into experts. Well, then they might
just switch languages, if we use the two-syntax language idea I mentioned in
response to your ideas last time. I am still thinking about that one.
On the three continents I've lived on (granted, one of them was Africa), most
high school graduates don't absorb very much beyond the most basic algebra. Ask
one of them whether they learned calculus, and you're likely to get an answer
like "I think we might have..."
Who are we talking about now? I thought we were talking about programmers, not
marketing majors or stock analysts or sociologists or secretaries. Most
programmers still do come, I believe, from a "hard" technical field like
engineering or biology. They are not going to be completely innumerate.
If we are talking about a programming language for Joe Q. Citizen, then I will
easily concede that ML is wrong, but so are Python and Perl. For this sort of
person, you want something much, much more declarative and specialized, like
CSS, or a visual language.
You dismissed my Stephen Hawking analogy, but considering that the market for
Hawking's book was high school graduate and above, why do you think the
presence of formulae was considered taboo?
Please, I know that many people are afraid of mathematics. But you were saying
that "ML doesn't `sell' [..] because it has `too many formulae'", and I
responded by saying that ML has as many `formulae' as Perl, to the average
person. Are you trying to tell me that regular expression syntax is universally
accessible, or that 5.+(6) is more familiar than 5+6, or that the factorial
function in Java is more understandable than its equivalent in Haskell?
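(For reference, the usual Haskell factorial:)

factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1)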
There's a reason that Perl, Python, Ruby, Tcl, and Javascript (1.x) are all
dynamically typed: it's a big part of what makes them lightweight.
Yes, I agree, and I agree with your subsequent comment that these languages
don't scale to large systems. In fact, one of the things I am looking into is
how to layer my language so that the very lowest layer is untyped.
In the business world, many applications have limited scope, so it doesn't
matter if what's developed doesn't solve any other problems or work the same
way as a similar program in another department, or whatever. (Of course, it can
get to the point where it does matter, which is when the consultants with MBAs
and PhDs get called in, and the money flows like water...)
It matters if you want to save money and build reliable systems by not
reinventing the wheel... Nobody hires experts in order to increase
their budget! (Except perhaps the government and the army. :)
I'm proposing that Perl's success can be learned from, rather than that we
should all adopt it as the programming language of the millennium.
Indeed, and I am keen to learn from this. In fact, believe it or not, I read
Perl articles, the Ruby site, and the Python news, including Fredrik's stuff,
quite often because I am jealous of their success and the apparent wide
appeal. As cultural phenomena, they are very successful.
My point is merely that I am unwilling to compromise technical merits for
popularity. If you can show me how to do that, I'm all ears.
In fact, do you think your teaching of Java really helps? No criticism
intended, but my point is that learning Java doesn't teach people how to solve
meta-problems, that's for sure, unless you teach them to generate Java
code. Still, none of this is intended to suggest that you shouldn't try to
teach or otherwise propagate the One True Way.
Helps what? No, it doesn't help them solve abstract problems. I don't think we
should be teaching them Java at all, but it was not my decision. I don't decide
the material we teach, and I don't even have them for the first half of the
course, so there are severe limits to what I can do. I wish it were
otherwise.
BTW, I don't subscribe to a One True Way. I just think typed FP is the best
we've got.
|
|
Frank Atanassow - Re: The History of T
12/18/2001; 3:00:33 PM (reads: 5392, responses: 0)
|
|
It seems to me that function overloading can serve at least two different purposes: one is a really useful one that allows generic programming, where you can write generic code that says "a < b" and know that no matter what a & b happen to be, as long as "<" is defined for them, a meaningful operation will occur and the result will be what your program expects to receive.
The concept that is wanted here is that of algebra homomorphism. For example, the fact that "+" is defined on both naturals and integers is a consequence of the fact that both form a commutative monoid with "0". It's OK to overload these things on primitives, because you know that the commutativity property will be satisfied, because there is a homomorphism from the monoid with naturals as carrier to the monoid with integers as carrier. This is the algebraic way of stating Barbara Liskov's substitution principle.
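A hedged Haskell sketch of the idea (the class name CMonoid and its stated laws are illustrative, not a standard library class):

class CMonoid a where
  zero :: a
  add  :: a -> a -> a
  -- required laws (not checked by the compiler):
  --   add is associative and commutative, and zero is its unit

instance CMonoid Integer where
  zero = 0
  add  = (+)

-- The usual embedding of naturals into integers is then a monoid
-- homomorphism: it sends 0 to zero and preserves add, which is what
-- licenses using the same "+" for both.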
The problem for user-defined datatypes is that equational conditions like commutativity are in general not decidable. Datatypes with equational conditions are sometimes called quotient types, so you might find something searching for that. It usually involves dependent types, though, so decidability is already out the window. In the language I am developing, you will be able to discharge these kinds of equational conditions by providing (statically) a proof, which amounts to a special rewriting system that normalizes values of that "subtype".
There's another sense, though, which is one of syntactic convenience - the sort of thing that probably makes Frank grind his teeth. ;) In this sense, it might be convenient to have "<" work in different ways in different contexts, and that's exactly the sort of thing that Perl does.
Well, it depends. In papers, I often use the same symbol to describe different things, but if I use them in the same scope then I try to make sure they are algebraically compatible. In programs, if they are in the same scope, I think this should be enforced; otherwise, they should be in different scopes, since they name different things.
I'm betting you won't find much academic research on that subject, though, because it's totally unmathematical - why would you call two different operators by the same name?
There is quite a lot of research on overloading... I saw one paper mentioned here not so long ago, in theory, about System CT.
|
|
Frank Atanassow - Re: The History of T
12/18/2001; 3:15:48 PM (reads: 5387, responses: 0)
|
|
As for "set"s being useful, I wonder why haskell doesn't have one, and plugging one needs a new whole set of sugar (filter, map, foldl... being already in use)
Sets are an example of a datatype with equational conditions, since they are commutative, associative and idempotent. Thus to do a fold on a set you need to ensure that the algebra you provide respects these conditions, and that is not decidable. If your algebra does not respect these conditions, then you will get different results depending on how your sets are implemented.
To see why, suppose there were a type of associative list (a monoid). The algebra you provide to fold it has to have a binary operator to replace the "conses", but if it is not associative then the result depends on how you parenthesize it. This is why lists are always right- or left-associative. (These are, in fact, the two canonical ways to normalize a monoid.)
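A two-line illustration of the parenthesization point, using the non-associative operator (-):

groupedRight, groupedLeft :: Int
groupedRight = foldr (-) 0 [1, 2, 3]   -- 1 - (2 - (3 - 0)) ==  2
groupedLeft  = foldl (-) 0 [1, 2, 3]   -- ((0 - 1) - 2) - 3 == -6
-- with an associative (and, for sets, commutative and idempotent)
-- operator the grouping stops mattering, and the fold is well defined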
Actually, you need more than a set to have a useful datatype: you need what is called a setoid. This is a set along with an equivalence relation on the elements (to decide uniqueness of an element). In Haskell, you might use a method of Eq, but those are not guaranteed to be equivalence relations, so it isn't bulletproof. (Unfortunately, almost all Haskell class instances need to satisfy equational conditions to behave properly, but Haskell can't express those conditions.)
Oh and BTW, filter, map and foldl are not sugar in Haskell; they are just regular functions. Also foldl only makes sense for lists. foldr is the canonical way to eliminate an algebraic datatype. foldl is a special function which is made possible by the fact that lists are linear.
|
|
pixel - Re: The History of T
12/18/2001; 3:56:47 PM (reads: 5397, responses: 0)
|
|
the fact that "+" is
defined on both naturals and integers is a consequence of the fact that both form a commutative
monoid with "0". It's OK to to overload these things on primitives, because you know that the
commutativity property will be satisfied, because there is a homomorphism from the monoid with
naturals as carrier to the monoid with integers as carrier. This is the algebraic way of stating
Barbara Liskov's substitution principle.
Do you really think there are people who can understand this in the Computer
Science world? I'm a "Computer Science" guy, not totally disturbed by the word
"isomorphism". I vaguely remember hearing about monoids. But I don't have a
clue what a "homomorphism" is. And 99% of the programmers I know don't (I've
just read what you've written, and we had a good laugh, no offense meant, just
culture clash). I've nothing against some theory, but you have to know that
many, many people do.
As for "+" being overloaded on numbers, as for me, i find it really natural to
overload it on strings, lists... whatever can be added together.
In programs, if
they are in the same scope, I think this should be enforced; otherwise, they should be in different
scopes, since they name different things.
Ever heard of OO? ;p
No kidding, the whole purpose of OO is to factorize common meaning, but for
"different things" (aka objects). By looking at the code, you can't know
exactly what is going to be done, you only have some idea of it.
Oh and BTW, filter, map and foldl are not sugar in Haskell; they are just regular functions. Also
foldl only makes sense for lists. foldr is the canonical way to eliminate an algebraic datatype.
foldl is a special function which is made possible by the fact that lists are linear.
I don't know what you mean. AFAIK (from looking at the code of foldl/foldr),
all you need is an iterator. As you can make an iterator from a tree, you can
foldl (or foldr) a tree.
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 5:03:40 PM (reads: 5379, responses: 0)
|
|
In programs, if they are in the same scope, I think this should be enforced; otherwise, they should be in different scopes, since they name different things.
Ever heard of OO? ;p
No kidding, the whole purpose of OO is to factorize common meaning, but for "different things" (aka objects). By looking at the code, you can't know exactly what is going to be done, you only have some idea of it.
You don't seem to be allowing for the distinction I was trying to make, though. Frank mentioned Liskov's principle, which was what I was referring to. In OO systems, it's startlingly easy to violate substitutability of objects simply by doing something questionable in an overridden method in a derived class. Whether you like it or not, this will break the behavior of your system, if you try to use an instance of a non-substitutable derived class, in place of its ancestor.
The above sort of OO is, in general, a completely different animal than overloading something like the + operator on numbers, strings and lists. The fact that there's an operator for all these types that has the same name has no real "meaning" in the OO sense, even if it's possible in the language, since you typically can't use such overloading to write useful generic code that deals with such different objects.
That doesn't preclude using operators like + for all those different purposes, but a properly typed language should be able to prevent you from inappropriately mixing the different meanings of the operator. So what Frank suggests about scope, is, IMO, absolutely correct, if applied correctly.
|
|
Anton van Straaten - Re: The History of T
12/18/2001; 7:06:24 PM (reads: 5388, responses: 0)
|
|
If a discussion is a means of compressing a disagreement into the smallest
possible representation (the algorithmic information theory interpretation),
then we're not doing too well so far, but I think there may be hope. Let's
see...
But it's only been 40-odd years since the first "high
level" language was invented, and there's plenty of stuff that
still has to trickle. Incontrovertible proof of this can be found in the
fact that the entire concept of functional programming has virtually
*zero* presence in the commercial world, some few special-case
exceptions like Erlang notwithstanding.
Waitaminute! What's this?! You were arguing that high-level features
haven't trickled down to the mainstream, period, and used FP as
incontrovertible proof because it has high-level features. But now you are
arguing that high-level features like GC and closures exist in the
mainstream, and that they were put there by OO languages descended from ST!
:)
You seem to be misinterpreting what I'm saying. I
didn't say that no high-level features have trickled down, I said that
"there's plenty of stuff that still has to trickle". I also said
that "the entire concept of functional programming has virtually *zero*
presence in the commercial world". Even though both garbage
collection and whatever closures are out there may ultimately have been
derived from functional languages, possibly by way of OO languages, the
"concept of functional programming" did not come along for the ride,
and has "virtually *zero* presence in the commercial world". So
I stand by what I've said, although I didn't state it as precisely as I could
have.
BTW, that reminds me. Algol 60 (1960) had
closures.
Yes, that's where Scheme got them from, as I understand it.
OK, I am going to argue eventually that most of these
syntactic conveniences are only convenient for writing bad programs.
Agreed, but I'm arguing that the ability to write bad programs is a
requirement for a language that will be used by programmers who are not
perfect.
Another aspect to it is that sometimes, the quick and dirty solution really
is the most appropriate. If a language doesn't support the development of
such solutions, that will hamper its acceptance. People will often try
Q&D first, and get serious later, perhaps when they're forced to. If
you don't cater to that, you will be ignored by many people.
someone wanted them, not because they're dictated by an
obscure branch of mathematics that has a really scary
name
That's a real cheap shot, Anton.
I hope you don't mean you took that personally. For the record, I think
the lambda calculus and its many derivatives are some of the coolest formalisms
I've ever encountered. But for most programmers, it's unquestionably
obscure. In my messages here, I've been deliberately taking the position
of the ordinary user, based both on my own experiences with languages, and my
experience with other people's experience of languages.
I agree with you about the reasons that features get accepted or rejected in
languages like ML. But what I'm saying is that sometimes, such niceties
may be ignored in the commercial world; this might even have a cost, perhaps a
significant one. Java, for example, is full of such things, although they
can mostly be defended as compromises necessary to achieve security goals.
they have been conditioned to rely only on low-level,
implementation-dependent descriptions of behavior, and consequently cannot
even see any opportunities for higher-level abstractions.
I'll grant you that too. I don't think we disagree on the facts very
much; it's more a question of what we think the consequences should
be. So I'm going to skip past your perfectly valid criticisms of the
shortcuts in Perl, to this:
Well, my response is: let's tell them! We can start by
giving them a full set of tools including a power wrench, rather than a
hammer and a couple of screws. And then we can give them an instruction
manual for our power wrench. And if, like most people, they don't bother
reading the instructions, then we can get a graphic designer to make it
sexy. And if it's not sexy enough we can do other things.
Well, sure! The problem is, that's going to be a long, slow, process -
and frankly, the "product" is not yet available. Which language
would you be trying to sell people on right now? I think it's wrong to
assume that because some smart programmers are able to use ML, Scheme, Haskell,
OCaml et al today to get real work done, that the same is true of the broader
programming community.
But, if evangelism or wide appeal is one of your goals, you may - no, will -
need to start doing some of that dumbing down you mentioned earlier.
Who are we talking about now? I thought we were talking
about programmers, not marketing majors or stock analysts or sociologists or
secretaries. Most programmers still do come, I believe, from a
"hard" technical field like engineering or biology. They are not
going to be completely innumerate.
I'm talking about programmers, but not scientific programmers. I think
this may be one of the biggest reasons for the impedance mismatch we seem to
have going here. I work a lot in the financial services industry, for
example - securities, reinsurance, stuff like that. Many of the people I
work with, if they have degrees at all, were arts majors, including subjects
like drama, English, history etc. That's not true when you get to the
people coding derivatives models and so forth, but in back office processing,
there's plenty of that. If you want to talk about "most"
programmers, you have to look at what generates the most demand for programmers,
and that is the millions of ordinary, everyday businesses out there. And
that doesn't count the people who write programs as just one part of their job -
I've seen accountants who program, for example, who have written entire
accounting systems, and higher math is no more a part of their vocabulary than
the drama major's. Perhaps we should be segmenting our space into more
than just two groups - perhaps many more.
If we are talking about a programming language for Joe Q.
Citizen, then I will easily concede that ML is wrong, but so are Python and
Perl. For this sort of person, you want something much, much more
declarative and specialized, like CSS, or a visual language.
Believe it or not, the people I'm talking about are still a selective
subset of the Joe Q. Citizens. I'll use Slash as an example again - do you
honestly think the author(s) of the original Perl version of Slash would be able
to tell you anything about calculus? (I'd be stunned.)
ML has as many `formulae' as Perl, to the average
person
Ah, but I think you have to understand them better in ML, or something.
The algebraic turn of mind, the topiary bush I mentioned. We need some of
the cognitive research Ehud talked about to address this, since it seems to be
one of our real points of disagreement.
It matters if you want to save money and build reliable
systems by not reinventing the wheel... Nobody hires experts in order to
increase their budget! (Except perhaps the government and the army.
:)
The problem is measuring success or failure. Most companies
never notice that they're reinventing the wheel - if they reinvent it on time
and within budget, they call it a success. Maybe they'll go out of
business if all they do is reinvent wheels, but that happens too. In
business, programming languages alone can't cure the disease, instead, they get
used to mitigate the symptoms.
My point is merely that I am unwilling to compromise
technical merits for popularity. If you can show me how to do that, I'm all
ears.
I think that a sufficiently flexible preprocessor/macro system could go a
long way towards making it possible to have a technically sound underlying
language, with a suitably sugary exterior. You said:
In fact, one of the things I am looking into is how to layer
my language so that the very lowest layer is untyped.
If you were to make a layer something like that available to others as a
compilation target, you would at least have a more sophisticated audience, and
might increase the chances of dispersal.
As I've already mentioned, I think future languages are likely to use a
combination of the two techniques I've just mentioned: preprocessing/macros at
the front end, and a sophisticated and correct underlying platform.
I don't think we should be teaching them Java at all, but it
was not my decision. I don't decide the material we teach, and I don't even
have them for the first half of the course, so there are severe limits to
what I can do. I wish it were otherwise.
I, too, wish it were otherwise. It seems to me you're dealing with
these things roughly the same way I am, i.e. accepting the status quo, but
you're more in denial about it. ;oP
I just think typed FP is the best we've got.
If by typed you mean statically, even with inferencing, I would amend that to
"type and untyped FP". I've yet to see a statically typed system
that doesn't remove expressivity, in the sense of requiring more effort to
express certain constructs. However, getting into the dynamic vs. soft vs.
static typing argument is beyond the scope of this discussion, except in the
sense that a system that can behave in an untyped way currently seems to be an
absolute requirement for a friendly language.
|
|
Dan Shappir - Re: The History of T
12/19/2001; 3:01:03 AM (reads: 5386, responses: 0)
|
|
Truly agreed. I wonder how long it will take for every language to have Ruby's:
File.open("/usr/local/widgets/data").each{|line|
puts line if line =~ /blue/
}
Something like this only needs anonymous functions (or better, closures) and some OO/mixins, so most languages can provide this. Why don't they?!
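(As an aside, a minimal Haskell sketch of the same idiom, needing only higher-order functions; Data.List.isInfixOf does the substring test:)

import Data.List (isInfixOf)

main :: IO ()
main = do
  contents <- readFile "/usr/local/widgets/data"
  mapM_ putStrLn (filter ("blue" `isInfixOf`) (lines contents))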
This is exactly the type of thing Sjoerd and I are trying to achieve with JavaScript. In our case this code would come out something like:
ReadFile("/usr/local/widgets/data").filter(/blue/).foreach(alert);
Obviously, the fact that JavaScript has anonymous functions and closures helps. Also, because JavaScript uses prototypes, adding functionality like foreach() to existing types like arrays is easy.
Our hope is that by building on a familiar base we will be able to seduce developers to a better style of coding. So far, we’ve had little effect on the developer community at large. Maybe when we’ve documented this stuff :)
|
|
Ehud Lamm - Re: The History of T
12/19/2001; 8:57:13 AM (reads: 5397, responses: 0)
|
|
As for "set"s being useful, I wonder why haskell doesn't have one, and plugging one needs a new whole set of sugar (filter, map, foldl... being already in use)
Set is an ADT. You can implement it in a variety of ways (each with its own set of pros and cons). A good language provides features that allow you to provide concrete implementations of ADTs, from libraries.
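A minimal sketch of such a library-provided Set ADT in Haskell (the module and the naive list representation are purely illustrative; the point is that the representation is hidden behind the export list):

module Set (Set, empty, insert, member, toList) where

newtype Set a = Set [a]        -- one possible concrete representation

empty :: Set a
empty = Set []

insert :: Eq a => a -> Set a -> Set a
insert x s@(Set xs)
  | x `elem` xs = s
  | otherwise   = Set (x : xs)

member :: Eq a => a -> Set a -> Bool
member x (Set xs) = x `elem` xs

toList :: Set a -> [a]
toList (Set xs) = xs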
BTW: Haskell, being purely functional, leads to non-trivial implementations of data structures. Check out Chris Okasaki's Purely Functional Data Structures.
|
|
Frank Atanassow - Re: The History of T
12/19/2001; 12:47:12 PM (reads: 5366, responses: 0)
|
|
Do you really think there are people who can understand this in the Computer
Science world?
The answer to your question is "yes". (I am one, I hope, and I talk to others
every day.)
The answer to the question you probably wanted to ask, namely, do I really think every person in computer science can understand this, is "no". (If I did, I wouldn't maintain a list of references on this stuff.)
The answer to the question you should have asked, namely, do I really
think that every language designer should know this, is "yes".
As for "+" being overloaded on numbers, as for me, i find it
really natural to overload it on strings, lists... whatever can be added
together
Yeah, but what can be "added together"? Numbers, strings, lists: fine. But those
examples do not characterize the notion of "addition". If you think OO is a
perfect solution to this problem, then you had better ask yourself whether
every binary function is a notion of addition.
There is a reasonable notion of "addition" which is quite general, and
is not defined by a finite list of examples, but rather a set of properties;
it's called a categorical coproduct, and is a generalization of what you
probably know as variant (or sum) types. Using this definition, if you give me
a binary function of type t -> t -> t, I can tell you whether it is "addition"
or not. Furthermore, when it is indeed a form of "addition", I can give you the
"design contract" for that "class". Even further, the design contract does not
depend on the implementation of objects of that class, but rather the methods.
[Anton wrote:] In OO systems, it's startlingly easy to violate substitutability
of objects simply by doing something questionable in an overridden method in a
derived class. Whether you like it or not, this will break the behavior of your
system, if you try to use an instance of a non-substitutable derived class, in
place of its ancestor.
Exactly. This is why I said OO is still immature. There are no implemented
OO languages (that I know of) that respect the substitution principle. There
is a way to design OO languages which do respect it; it is just the
dual of algebraic datatypes, and you can read about it, for example, in the
paper A Tutorial on
(Co)Algebras and (Co)Induction by Jacobs and Rutten. OO language
implementers, though, (and even, by and large, OO researchers) are sadly
oblivious to coalgebra.
Also foldl only makes sense for lists. foldr is the canonical way to eliminate
an algebraic datatype. foldl is a special function which is made possible by
the fact that lists are linear.
I don't know what you mean. AFAIK (from looking at the code of foldl/foldr),
all you need is an iterator. As you can make an iterator from a tree, you can
foldl (or foldr) a tree.
foldr is an instance of a function which is generic over all algebraic
datatypes, i.e., if you give me an arbitrary algebraic datatype, I can give you
a definition of foldr on it, and every such foldr obeys a "design
contract". I'm not sure what you have in mind but foldr is always a bottom-up
accumulation. It has nothing to do with left-to-right or right-to-left
traversal. That is an artifact of the linear nature of lists. It's only called
foldr because lists associate to the right, and cons lists are the paradigmatic
example. The foldr of a left-associative list (snoc lists, they are often
called) is still a bottom-up accumulation. (Thus, "foldr" is not the name which
is used in mathematical circles; instead it's called a catamorphism, from the
Greek "cata" meaning downwards and "morph", shape. Sometimes it's just called
"(the witness to) the induction hypothesis", if that helps any. CS people usually just call it the "fold", with no 'r' or 'l'.)
In saying that foldr is the "canonical" way to eliminate an algebraic datatype, I meant that every homomorphism from an algebraic datatype, i.e., every (total, but not necessarily computable) function from an algebraic datatype which respects the algebraic structure, is expressible in terms of foldr. This is part of the design contract for algebraic datatypes. It's important because it means that whenever you deconstruct a list in a terminating program, you can express that deconstruction process using foldr.
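A hedged sketch of what "generic over all algebraic datatypes" means in practice (the Tree type and foldTree are made up): one case per constructor gives you the canonical fold for that type, just as foldr is for lists:

data Tree a = Leaf | Node (Tree a) a (Tree a)

foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
foldTree leaf node Leaf         = leaf
foldTree leaf node (Node l x r) =
    node (foldTree leaf node l) x (foldTree leaf node r)

-- compare: foldr :: (a -> b -> b) -> b -> [a] -> b, with one argument
-- per list constructor ((:) and [])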
Oh, re: the first part of my post, if I might remind you:
Computer Science is no more about computers than astronomy is about telescopes.
-- Edsger Dijkstra
|
|
Frank Atanassow - Re: The History of T
12/19/2001; 12:52:05 PM (reads: 5348, responses: 3)
|
|
Ehud, I don't understand why my HTML anchor links sometimes get reproduced verbatim in my posts, and sometimes not. Is it a bug?
BTW, since this discussion page is getting over 250KB, and the response time is getting intolerable: maybe we can transfer this discussion to a newer topic's page on LtU...? Or perhaps there is some option Ehud can flip on Manila to only show the titles of posts...? (I still want to discuss some of these threads.)
|
|
Ehud Lamm - Re: The History of T
12/19/2001; 1:11:37 PM (reads: 5383, responses: 2)
|
|
I was wondering about your links. I am not sure why this happens. Are you using the browser editor box, or writing the HTML by hand? Perhaps whitespace tickles a bug somewhere.
We should indeed refactor this thread...
My best idea is that we start a new discussion group thread for each live topic. I was going to do that on ADTs, but I was stumped not knowing which paper to link to first...
I am with you on OO, but I am not sure coalgebras are the solution to the real problem OO tries to solve. Maybe substitution is a by-product, and not the fundamental issue? (I am still thinking about this. I don't claim to have an answer.)
Thanks for the linear logic links, by the way. It was just what I needed to finally read Wadler's tutorial. It is great fun. I'd give him ten zlotys...
|
|
Frank Atanassow - Anchors, coalgebras, zlotys
12/19/2001; 3:04:52 PM (reads: 5441, responses: 1)
|
|
(Re: anchors) I do it by hand. I usually use IE to reply from home and, because I am an Emacs user, I tend to hit Ctrl-W to delete things often. This closes the IE window!!! So now I type everything into Emacs first and paste it into the text area.
(Re: coalgebras) If you are doing equational reasoning, substitution is always the fundamental issue... There are other kinds of logic, of course. Anyway, I can't imagine a more basic problem. Maybe you have something specific in mind...?
(Re: zlotys) Haha, I got a good laugh out of that! (But then, I've had about 6 beers already...! Maybe you can tell by all the ellipses........)
Anton: I will reply to you when I am more sober. You deserve a straight answer. (And why are you doing all that financial gobbledygook?!? :) You should come here to Utrecht; Doaitse Swierstra will finally set you straight on the feasibility of FP in the Real World.)
|
|
Frank Atanassow - Hacking around Perl
12/19/2001; 3:13:40 PM (reads: 5405, responses: 0)
|
|
OK, Ehud, it looks like anchors don't work if your anchor element gets split across two lines (between the "a" and "href"). At least, I was able to fix it by re-breaking the paragraph, which eliminated some whitespace like you suggested.
(80% solutions, eh? :)
|
|
Anton van Straaten - Re: The History of T
12/19/2001; 11:22:29 PM (reads: 5347, responses: 0)
|
|
(And why are you doing all that financial gobbledygook?!? :) You should come here to Utrecht; Doaitse Swierstra will finally set you straight on the feasibility of FP in the Real World.)
"Historical reasons". Switching to academia is something I've thought about more than once... I'll be sure and drop in for a beer or six, next time I'm in Europe (last time was 1991, I'm afraid...)
I think FP is feasible in the Real World - but only amongst a subset of the relatively small pool of truly qualified engineers.
I didn't see any other discussion set up yet - let me know if I'm missing something.
(80% solutions, eh? :)
Let us know when you have a message board system written in Arrow, and we'll set it up! BTW, management is saying they want it by next Tuesday... And could you help Fred with the Banker's Trust project, while you're at it? He's falling behind... Can you see why mathematical purity starts falling by the wayside?? I'm off to have a Mexican beer...
BTW, there've been some messages along the same lines as this discussion, on the LL1 list. I'll post web links later on Thursday.
|
|
Ehud Lamm - Re: Anchors, coalgebras, zlotys
12/20/2001; 12:49:02 PM (reads: 5567, responses: 0)
|
|
Programmers using OOP are not interested in equational reasoning. I think they are trying to achieve several different goals by using OOP, most of them related to the idea of interfaces and (data-)abstraction. The problem is that most "algebras of abstraction" (for want of a better term) are not implemented, or not well enough. One thing I think is missing is dealing with abstraction breaking. Inheritance is a great way to break contracts, and this is one of its major uses. Obviously, this is dangerous. Inheritance, alas, isn't made safer even by DbC. I started working on this, but I have yet to reach anything really useful (my Ada-Europe'2001 paper is about this).
|
|
|
|