Hundred Year Language
started 4/9/2003; 4:40:31 PM - last post 5/14/2003; 9:28:50 AM
Patrick Logan - Hundred Year Language
4/9/2003; 4:40:31 PM (reads: 2907, responses: 42)
Dan Shappir - Re: Hundred Year Language
4/10/2003; 9:54:35 AM (reads: 2938, responses: 1)
When I say Java won't turn out to be a successful language, I mean something more specific: that Java will turn out to be an evolutionary dead-end, like Cobol.
This is an obvious falsity - Java has already evolved into several new forms: C#, J#, VB.NET and JScript.NET just to name a few ;-)
Ehud Lamm - Re: Hundred Year Language
4/10/2003; 11:59:59 AM (reads: 2991, responses: 0)
Perhaps the best thing that can happen is that Java turns out to be a dead end...
Tim Sweeney - Re: Hundred Year Language
4/11/2003; 5:46:20 PM (reads: 2909, responses: 0)
This is a great article, the first such set of predictions I've seen that are grounded in reality and history, rather than silly speculation.
The history of mathematics provides a good roadmap, both for the areas where we can expect major progress, and those where we should only hope for incremental improvement.
When I was a kid, I had an old calculus book from the early 1900's and learned calculus from it, because that was the only reference I had on the topic. When the subject was later taught in school, it was a surprisingly natural transition -- this despite the 100 years of scientific progress that had passed since the book was written.
Language syntax and notation can only be expected to change incrementally. 100 years ago, mathematicians used f(x) to denote "calling function f with parameter x", and they will likely still be doing so 100 years from now. This is almost certain to carry over to future programming, too. The current notations for function calling, infix operators, and definitions won't likely change much. I expect that if in 100 years I showed a programmer "f(x:int):int -> if x<=0 then 1 else x*f(x-1)", he would understand exactly what this code is doing.
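(For what it's worth, that definition already goes through almost verbatim today. A minimal Haskell rendering -- f being nothing more than the example's own name -- differs only in surface details:)
f :: Int -> Int
f x = if x <= 0 then 1 else x * f (x - 1)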
The most important change in mathematics over the past 100 years has been in putting it on a solid logical foundation, such as basing calculus on formal limits rather than ill-defined infinitesimals.
I think the most important change in programming in the next 100 years -- and which will happen much sooner than 100 years from now -- is the reformulation of language semantics around well-defined type systems based on mathematical logic. The last 30 years have led to incredible progress in research here, but the results haven't been picked up by mainstream languages, though R&D languages like Haskell based on sound type systems are gaining ground among certain groups.
C# is a great illustration of the superficiality of popular language progress. It is the most polished and fine-tuned mainstream language ever created, yet it fails to come to grips with basic mathematics and type theory. You still have absurdities like 2^64=0 and 7/2=3, simple expressions like x/y and a[x] that the compiler accepts yet that aren't actually typesafe under any reasonable definition of the term, and arrays with serious typing flaws; and the basic confusion between things and references to things remains.
Right now, the open language lineages are:
- LISP: This is the apex of untyped programming with nested scoping and metadata manipulation. It's at the end of the line, not because of any lack of potential, but because it's so simple and powerful that any improvements to it can be implemented in LISP itself without affecting the core language. In 100 years, after the last C and C++ programmers are long gone, there will still be LISP enthusiasts. But don't expect large-scale software development to happen this way.
- C, C++, Java, C#: The problem nowadays isn't with implementation but with the underlying flaws in the paradigm. We can continue to go the route of adding tweaks as with C#, but they're asymptotically approaching a limit that's not far from the current state of affairs.
- Python, Perl, etc: Languages with significant improvements in usability for certain audiences; they are generally a mix of untyped LISP features and typed C/Java-style features. The primary innovations I see here are in the syntax and libraries, for example Python's pioneering of "pseudocode that runs" and Perl's first-class regular expressions and text parsing capabilities. Future languages will certainly inherit these features, even as the underlying paradigm changes.
- ML, Haskell, simply-typed functional languages with type deduction and, in general, programs consisting of a whole bunch of top-level declarations. The elegance of these languages rests on Hindley-Milner type inference, and with most of the needed extensions to the language (subtyping, dependent types) that breaks down, and the resulting declarations and error messages become quite a mess. These languages will certainly remain useful to their current fans, but I see no hope for this style of programming to scale up to mainstream development.
So which lineage will be the basis of languages 100 years from now? I think the syntax has long since been determined, and will fall very much in the middle ground between Haskell, Pascal, and C++ -- and not very much outside that triangle.
Regarding the type system and semantics, I think the basic principles in such mathematical-style code as "f(x:int):int -> if x<=0 then 1 else x*f(x-1)" will not change one iota in 100 years, but that the current notions of objects, references, member functions, exceptions, however-many-bit integers, integer being "the" type of 7, and so on will seem terribly archaic. In this area, I don't think any current lineages will stand.
However, I do think we'll converge on a type system that is not far outside of the past 20 years' research into dependent types, intersection and union types, unification, constraint logic, and proofs-as-programs. These type systems essentially have the expressive power of mathematical logic itself, which is fundamental and not a product of invention or whim.
Alex Peake - Re: Hundred Year Language
4/11/2003; 6:44:05 PM (reads: 2820, responses: 0)
IMHO (what counts is Productivity)...
"...most important change...around well-defined type systems..."
Type systems are about bug prevention and early detection, and bugs of the "type mismatch" variety only. Has anyone really studied the cost of early detection vs. productivity loss from not using dynamically typed languages?
"LISP...don't expect large-scale software development to happen this way..."
Why? Why are many fairly inadequate languages more successful than good, powerful languages? It's about 1) Money -- money begets money, and Microsoft has a vested interest in maintaining their status quo of followers, as have the Sun/IBM/Oracle gang as a way of competing with Microsoft. They are the ones with marketing might. They are the ones who communicate the loudest. They are the ones that most "choosers of programming languages" hear. And they are "safe". And 2) the great majority of "choosers of programming languages" are either incapable of or uninterested in actually evaluating programming languages.
"...ML, Haskell, (Erlang,) simply-typed functional languages..."
I think the real elegance, and productivity, comes from the declarative style and the pattern matching.
BTW, Smalltalk was conspicuously absent - a wonderfully productive dynamically typed, reflective,... development environment!
Nay, I say, the future is LISP (well Scheme maybe) and Smalltalk (if only someone could afford to Market either).
Frank Atanassow - Re: Hundred Year Language
4/12/2003; 6:19:14 AM (reads: 2798, responses: 0)
This is a great article, the first such set of predictions I've seen that are grounded in reality and history, rather than silly speculation.
Funny you feel that way, since you contradict most of his claims in your response.
The most important change in mathematics over the past 100 years has been in putting it on a solid logical foundation, such as basing calculus on formal limits rather than ill-defined infinitesimals.
Actually, as Hilbert discovered thanks to Goedel, mathematics hasn't got and will never have a solid logical foundation. OTOH, you are right that the formalization of logic and proofs has put mathematics on a substantially more solid footing.
C# is a great illustration of the superficiality of popular language progress. It is the most polished and fine-tuned mainstream language ever created, yet fails to come to grips with basic mathematics type and theory. You still have absurdities like 2^64=0, 7/2=3, simple expressions like x/y and a[x] which the compiler accepts yet aren't actually typesafe in any reasonable definition of the term, arrays have serious typing flaws, and the basic confusion between things and references to things remains.
While I can agree about C# being exemplary of superficial progress, I don't find these examples very convincing. Why is 2^64=0 absurd? This is just exponentiation modulo 2^64. Why is division not typesafe? If you think partial functions should not be typeable, then you ought to also reject while-loops and general recursion. OTOH, I can well believe that array dereferencing breaks static typing in C#, though I don't know the details.
LISP: This is the apex of untyped programming with nested scoping and metadata manipulation. It's at the end of the line,
I have a feeling that DT languages are not at the "end of the line" and could go further by adopting online partial evaluation techniques, as in Psyco (or whatever that Python compiler is called).
ML, Haskell... These languages will certainly remain useful to their current fans, but I see no hope for this style of programming to scale up to mainstream development.
Why not? I think they already scale much better than conventional languages.
However, I do think we'll converge on a type system that is not far outside of the past 20 years' research into dependent types, intersection and union types, unification, constraint logic, and proofs-as-programs. These type systems essentially have the expressive power of mathematical logic itself, which is fundamental and not a product of invention or whim.
I must say, I see no hope for dependent types in the realm of programming, because they work against rather than for separation of concerns. (They are still useful for theorem proving, though.) Intersection and union typing have similar issues. I am not very fond of constraint logic because it isn't compositional: it's too easy to add a new constraint which renders the entire system inconsistent. Proofs-as-programs is, of course, an idea I am quite fond of.
Personally, I think the hits of the near future (ten or fifteen years) in the statically typed realm will be staged/reflective programming languages like MetaML, generic programming as in Generic Haskell and languages which support substructural logics like linear logic. I see opportunities for combining these in a powerful way to support lawful datatypes, i.e., datatypes which satisfy invariants in a decidable way. The underlying ideas will probably also pave the way for decidable behavioral subtyping in OO-like languages. A unified foundation for functional, procedural, OO and logic languages is also something I predict we will see soon. The aspect-oriented stuff will probably be sorted out and mapped onto existing concepts in current paradigms. (I don't think there is anything really new there.)
Also, I think that in twenty years there will no longer be any debate about static vs. dynamic typing, and that instead most languages will provide a smooth continuum between the two, as people realize that there is no fundamental dichotomy there. Also languages will have more than one level of types: the types themselves will have types, and so on. But we will be able to treat types in a similar way to the way in which we treat values: we will be doing computations with them.
There will be an increased emphasis on efficiency in the future, and I think we will see fewer aggressively optimizing compilers; instead it will be the programmer's job to write his programs in such a way that they are pretty much guaranteed to be efficient with any reasonable compiler. This sounds like a step backward, but it won't be because we will be better able to separate concerns of specification from implementation.
(The reason I see more emphasis on efficiency is partly because I think that wearable computers and other small computers will become ubiquitous, partly because I think we will have the technology to do it when substructural logics invade static typing, and partly because we are starting to understand how to assign static type systems to low-level languages like assembly language.)
Alex wrote:
Type systems are about bug prevention and early detection, and bugs of the "type mismatch" variety only.
Isn't that a tautology? Of course: type-checking fails when types don't match/unify.
The real question is, what is the expressivity of the language of types? If your type language can express arbitrary formulae in logic X, for example, then a type mismatch error of an arbitrary type against the constant "true" is part of a decision procedure for X, so your type checker is actually a theorem prover.
Has anyone really studied the cost of early detection vs. productivity loss from not using dynamically typed languages?
I must admire how subtly you managed to phrase that. Here is a question for you:
Has anyone really studied the cost of late detection vs. productivity loss from not using statically typed languages?
Frank Atanassow - Re: Hundred Year Language
4/12/2003; 6:46:59 AM (reads: 2758, responses: 0)
Language syntax and notation can only be expected to change incrementally. 100 years ago, mathematicians used f(x) to denote "calling function f with parameter x", and they will likely still be doing so 100 years from now.
Oh, I don't think I agree with this either.
First, actually mathematics has been going on for rather longer than a century, and the notation has only started stabilizing recently.
Second, I think the reason we have been using this sort of textual syntax is mainly because of the limitations of pen and paper, complications of typesetting and the current emphasis on "document" models of computing interaction.
Actually, I think that program syntax (by which I mean the way a human communicates the program to the computer) may change drastically in the next hundred years. Currently, there is lots of redundant indirection. For example, you have to name the arguments of a function because they behave as placeholders. But indirection is a cognitive burden. As computer interfaces and graphics evolve, I think we will see less indirection somehow. Instead of naming a thing, you will be able to manipulate the thing directly.
Vaughan Pratt gave a great example of this in one of his papers. He noted how program generators invent names to label nodes in a graph, and how edges from one node to another are given by giving the name of the node. But the obvious solution is actually to just draw an arrow from one node to the other. This isn't really possible now, because so many of our tools are text-centered, but in the future I think things of this sort will become common, as we get out of our Emacs/VIM/UNIX mindset.
Similarly, if you have a piece of data in some program which models an image, or a figure in some space or something, I think we will see in the future that the data will be represented by what it denotes, avoiding the indirection. Operations on the data will be lifted up to operations on the denotation, something like in OLE/COM, but without all the boilerplate and fuss which must be programmed up manually. An important idea to make this work is completeness: every allowable operation on the indirect representation will have to be accessible via the interface to the direct representation. (This is where current ideas like drawing trees as figures in program text fall down, I think. You get a drawing of a tree, for example, but you can't manipulate it directly; you still have to edit the code.) To make all this work you need a lot of infrastructure, though.
In summary, concrete syntax will disappear, and we will get used to the idea that a program can have multiple representations, and that the "essence" of a program is its semantics, not its syntax.
Alex Peake - Re: Hundred Year Language
4/12/2003; 8:16:02 AM (reads: 2740, responses: 0)
Bravo Frank, a most insightful view and prediction!
On a minor note, the comments:
Type systems are about bug prevention and early detection, and bugs of the "type mismatch" variety only.
Isn't that a tautology? Of course: type-checking fails when types don't match/unify.
Has anyone really studied the cost of early detection vs. productivity loss from not using dynamically typed languages?
Has anyone really studied the cost of late detection vs. productivity loss from not using statically typed languages?
My point was two-fold: 1) static typing adds the ability to catch x% of bugs (x << 100) and dynamic typing adds development speed -- I have found the DT tradeoff more productive in my development years (but then I develop business systems, not systems people's lives depend on directly) -- and 2) the tradeoffs are, in most domains, conjecture, since there are insufficient studies to support cost savings in either argument.
Tim Sweeney - Re: Hundred Year Language
4/12/2003; 8:41:31 AM (reads: 2782, responses: 0)
> Actually, as Hilbert discovered thanks to Goedel, mathematics hasn't got and will never have a solid logical foundation
Please take everything I say here to be mod undecidability, the halting problem, and finite resource limitations on actual computers.
> Why is 2^64=0 absurd? This is just exponentiation modulo 2^64.
While the C definitions of (+,-,*) are consistent with the ring of integers mod 2^64, (/,<,<=,>,>=) aren't. C math is inherently non-mathematical.
This was fine for the "platform-independent assembler" that C was in the 1970's, but in the future I think programming language math will map more closely to actual math.
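(To make the mismatch concrete, here is a small Haskell sketch using the fixed-width Int64 type, assuming GHC's wrap-around overflow behaviour, which mirrors C's 64-bit arithmetic:)
import Data.Int (Int64)
-- (+), (-), (*) are consistent with the ring of integers mod 2^64 ...
wraps :: Int64
wraps = maxBound + 1                                  -- wraps around to minBound
-- ... but ordering and division are not: x < x + 1 fails at the wrap point,
-- and division truncates rather than behaving like mathematical division.
orderingBreaks :: Bool
orderingBreaks = maxBound < (maxBound + 1 :: Int64)   -- False
truncates :: Int64
truncates = 7 `div` 2                                 -- 3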
> Why is division not typesafe?
int divide(int num,int denom):
In C, this is a useful function, but it's not division. It crashes in the case of a zero denominator, and returns mathematically incorrect results when the numerator is not an integer multiple of the denominator.
That Ackermann(64,64) is an observationally partial function is an artefact programmers must accept if we want the power of being able to write general recursive functions.
But a division function that crashes and sometimes returns invalid results isn't going to be with us much longer.
rational divide(rational num,rational<>0 denom):
This is a typesafe and mathematically correct definition of division. Type systems in current languages don't support this, but there is a great deal of promising type system research to draw from for the future.
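(A rough approximation is already expressible in Haskell today; NonZero, nonZero and divide below are illustrative names, not from any existing library:)
-- The "denominator is non-zero" fact is checked once, at construction,
-- and the type carries that evidence from then on.
newtype NonZero = NonZero Rational
nonZero :: Rational -> Maybe NonZero
nonZero 0 = Nothing
nonZero r = Just (NonZero r)
-- Total, mathematically correct division: no crash case remains.
divide :: Rational -> NonZero -> Rational
divide num (NonZero denom) = num / denom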
> C# arrays
For example, you can pass an array of integers to a function expecting an array of objects, and try to write a character into it. This is the classic problem of not properly distinguishing a thing and a mutable reference to a thing. A thing is constant and referentially transparent and obeys very useful subtyping and variance rules, for example safely allowing you to pass an array of integers to a function expecting an array of objects. A reference to a thing isn't like that at all.
> Dependent types
How do you feel these work against separation of concerns?
This answer might partially depend on what one considers dependent types. C++ templates (yuck) and C# generics could be considered dependent types, with a syntax limiting them to situations that allow decidable typechecking. Haskell, if you get into the implicit universal quantifiers involved in type deduction, has a constrained form of dependent types.
One view on dependent typing is that C++/C# generics are just a limited form of expressing functions that take type parameters and return types or functions as results. There's a direct correspondence between something like:
template<typename t> class c { int a; t b; };
And a dependent-typed definition like:
c(t:type):class={a:int,b:t};
But with dependent types, you have much more power to mix types and values. And, typechecking becomes undecidable in the general case, though the subset corresponding to any existing type deduction / template / generic type frameworks is certainly decidable, so it's at least as expressive as past work.
I don't see undecidable typechecking as a big problem. Current compilers either succeed or fail with a definite error. Undecidability just leads to another category of compiler failure, where the compiler can't accept the program, not because it's certain to be incorrect, but because it's not certain to be correct.
> unification of functional, logic, OO
I agree 100% about the upcoming convergence of these things and find the topic very exciting.
I think, actually, you'll get constraint logic for free as a result here. Yes, it will be undecidable in many situations, but a practical tool regardless; C++ template typechecking is undecidable, for example. Typecheckers eventually become theorem provers in this case, though probably not very powerful ones.
Dan Shappir - Re: Hundred Year Language
4/12/2003; 12:00:18 PM (reads: 2720, responses: 0)
>> C# arrays
> For example, you can pass an array of integers to a function
> expecting an array of objects, and try to write a character
> into it.
Ah yes, it's well known that arrays are evil. Maybe now that C# is getting generics they'll get rid of arrays altogether. Somehow, though, I doubt that will happen.
One problem I have with Frank's views has to do with "visual programming". I believe that a 2D representation is simply too limiting, but the human brain is limited in its ability to work in 3D (and certainly above that).
I also have a problem with the entire proposition of being able to predict such a far off future. Digital computers might still be used, but other types may also appear. Think "human" brain, or quantum computers, or something totally different.
Patrick Logan - Re: Hundred Year Language
4/12/2003; 12:38:40 PM (reads: 2746, responses: 0)
BTW, Smalltalk was conspicuously absent - a wonderfully productive dynamically typed, reflective,... development environment!
Nay, I say, the future is LISP (well Scheme maybe) and Smalltalk (if only someone could afford to Market either).
Python probably belongs in this category too, although it has way more cruft than Smalltalk or modern Lisps.
Very interesting thread here. One hundred years is a long time. I predict the next ten years will be more dynamic along these lines.
Perhaps after that better type checkers (but they'll just be called theorem provers), model checkers, etc. will provide more verification as systems begin to program themselves and each other with less manual input.
Sjoerd Visscher - Re: Hundred Year Language
4/12/2003; 1:35:29 PM (reads: 2721, responses: 1)
Patrick Logan - Re: Hundred Year Language
4/12/2003; 4:57:32 PM (reads: 2698, responses: 0)
I am not sure how related this is to Sjoerd's blog post, but his post did inspire some thoughts, posted on my blog too.
I can tie these thoughts in with some other sources that seem more or less related to me.
- Magnus Carlsson writes about Incremental Computations using Haskell. The simplest example of this is a spreadsheet.
- Ralph Johnson and Hiroaki Nakamura write about an adaptive framework for business systems.
These all represent to varying degrees the future of computing systems:
- assembled incrementally
- various participants
- somewhat ad hoc
- the whole emerging from the sum of its parts
And what they have most in common is that there is no single "result". Instead the value of the system is the on-going re-evaluation of the previous results.
Kimberley Burchett - Re: Hundred Year Language
4/12/2003; 6:48:26 PM (reads: 2681, responses: 0)
I agree with Sjoerd's comment about spreadsheet-like languages. In fact, I started working on something like that for Java a long time ago. I never implemented it (although I built a prototype in scheme), but the writeup might be interesting to some people.
Patrick Logan - Re: Hundred Year Language
4/12/2003; 7:16:54 PM (reads: 2668, responses: 0)
Kimberley's Ripple is yet another motivation for simple languages like Smalltalk, Scheme, and Haskell that can be extended to accommodate new kinds of computational models.
I suspect the reason she never implemented it in Java was the requirement to parse all of Java plus the new syntax without any help from the Java compiler itself.
A very clear reason why Java will not contribute much to the Hundred Year Language.
Frank Atanassow - Re: Hundred Year Language
4/13/2003; 10:52:07 AM (reads: 2605, responses: 1)
Alex wrote:
1) static typing adds the ability to catch x% of bugs (x << 100) and
dynamic typing adds development speed
I guess by "development" you mean writing code. As I recall, it is a
well-accepted tenet of software engineering that coding time is dominated
by debugging time. Of course, whether that means you save time still depends
on x.
However, I also find that it is not the case that x << 100; indeed, the type
checker catches almost all my bugs, and I write fairly complex algorithms.
But it is all a matter of how much work you are willing to shift to the type
checker, and I note that few people are willing to do so, even among academic
ST advocates. You can write your code so that the type checker will never be
able to catch your bugs, or you can exploit it as much as possible, which is
what I tend to do. For example, if you use lists where non-empty lists will
do, or you use partial functions like (in Haskell) head or
tail, then you are paving the way for disappointment.
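(For instance, a two-line non-empty-list type -- my own illustrative names, not a standard library -- is enough to make taking the head a total operation and move that whole class of bug into the type checker:)
data NonEmpty a = a :| [a]
hd :: NonEmpty a -> a
hd (x :| _) = x      -- total: there is no empty case left to crash on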
Tim wrote:
While the C definitions of (+,-,*) are consistent with the ring of
integers mod 2^64, (/,<,<=,>,>=) aren't. C math is inherently
non-mathematical.
I don't think it's useful to call C's arithmetic "unmathematical".
First, mathematics is not limited to the usual algebras of integers or
reals. Indeed, I think these are only the simplest kinds of algebras in their
class.
Second, saying that C arithmetic is not what you would like it to be does not
help at all when trying to give a semantic model that actually predicts C
program behavior.
That said, of course I would prefer it if C arithmetic were more regular, or
there were a (standard) alternative which actually did obey the usual laws.
The same holds many times over for floating point numbers. They aren't
reals but, AFAIK, they form a perfectly good algebra. We could replace them
with an exact model, like continued fractions or something, but that doesn't address the performance problem. Conversely, we can pretend they are
reals, but that doesn't address the correctness problem. The only good
solution I know is to treat them as what they are, and if it's too
complicated, then refine and regularize them in future languages (and
processors).
> Why is division not typesafe?
int divide(int num,int denom):
In C, this is a useful function, but it's not division. It crashes in the case
of a zero denominator, and returns mathematically incorrect results when the numerator is not an integer multiple of the denominator.
OK, but that doesn't break type safety. It would be unsound if divide returned
a value which was not an int, but if it returns nothing at all, e.g., if it
crashes, then there is no type safety problem. (Whether the error is trapped
or not makes no difference.)
How do you feel [dependent types] work against separation of concerns?
In a nutshell, the reason is that the appropriate notion of equivalence for
types (isomorphism) is "one degree weaker" than that for values (equality);
since dependent types mix values and types, one ends up using the least common denominator, namely equality, for all type expressions, which unnecessarily fixes implementation
strategies and/or forces you to insert lots of coercions between abstract types
which are provably isomorphic.
For example, a typical application of dependent types is to implement total
operations on sized types like integer arrays. An integer array of length n is
isomorphic to an n-fold product of integers.
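(As a concrete sketch -- in GHC-style extended Haskell rather than anything this thread assumes, with Nat and Vec as illustrative names -- a sized array type looks like this:)
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
data Nat = Z | S Nat    -- type-level naturals
-- "Vec n a" plays the role of "array a n": the length is part of the type.
-- e.g. Vec ('S ('S 'Z)) Int is isomorphic to the product (Int, (Int, ())).
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a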
If you implement sized arrays as products (via type equality), to share the
operations on products with those on arrays, then the normal form of the
type "array int n" has size proportional to n, which is obviously not good
if you are trying to compile, for example, programs for image analysis, or
bioinformatics, where arrays are often millions of elements long.
OTOH, if you implement sized arrays as an abstract datatype, then you have to
insert coercions between it and its representation everywhere. This is a
burden on the programmer, and also has a run-time cost.
For this single example, it may not seem like such a big deal, but if you have
lots of types like this, it gets to be a big problem and dependent types will
not help. I can give you a real-life example if you like.
Dan wrote:
One problem I have with Frank's views has to do with "visual
programming". I belive that a 2D representation is simply too limiting, but
the human brain is limited in its ability to work at 3D (and certainly above
that).
First, text representations of source code are more like 1D.
Second, you are undoubtedly right that humans find it increasingly difficult
to think about things in higher dimensions, but that is really beside the
point. If the semantics of a program is a 2D or 3D thing, for example say you
are modelling a chessboard, then there is really no choice, is there? You can
map a chessboard into a 1D structure (pick a traversal order for the squares)
but obviously there are many ways of doing so, so the extra indirection just
increases the cognitive burden, because you need to add and preserve
information (the ordering) when you transform it.
Third, I'm not really talking about visual programming, per se---that is just
one possible implementation that popped into my head. I'm really just saying
that in the future we will be dealing with things in more direct ways, which
is just another way of saying we will be doing things more declaratively, with
fewer (arbitrary) encoding and decoding steps between the semantics (what we
want to express) and the syntax (how we express it). In other words, the gap
between syntax and semantics will shrink. Exactly how, I'm not sure; it may
involve visual presentations of things, or maybe not.
I also have a problem with the entire proposition of being able to predict
such a far off future.
Me too, and that's why I tried to keep my forecasts to within the next fifteen
or twenty years.
Sjoerd writes:
I think one area of expertise is still missing: continuous data
transformation. Something like XSL, but in such a way that the result changes
automatically when the input changes, as in Excel.
I've heard about this thing called "functional programming" which seems to fit
the bill... :)
I admit I'm being a little facetious, though, since FP in itself does not
solve the efficient incremental update problem (though, neither does XSL nor
Excel). Someday someone will figure out how to use Conor McBride's type differential operator to automate this sort of thing.
Ehud Lamm - Re: Hundred Year Language
4/13/2003; 12:53:04 PM (reads: 2636, responses: 0)
But it is all a matter of how much work you are willing to shift to the type checker
It also depends on how expressive the type system is. Frank, you should keep in mind that not everyone works with Haskell. Most type systems are much more rigid, and using them to express subtle constraints is not really a viable option.
Indeed, I think the trend towards more expressive type systems is likely to continue. This may change our perception of DT vs. ST, as some poster said, but it is sure to change our perception of programming (for "our" read "the people not using Haskell and its ilk").
It would be unsound if divide returned a value which was not an int
Well, it can return Maybe int instead...
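(A sketch, with an illustrative name:)
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)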
Kimberley Burchett - Re: Hundred Year Language
4/13/2003; 1:50:26 PM (reads: 2607, responses: 0)
I suspect the reason she never implemented it in Java was the requirement to parse all of Java plus the new syntax without any help from the Java compiler itself.
Actually that's pretty much correct. In response to the trouble I had with implementing Ripple, I started down the road of making a Java macro facility.
My first attempt was Jatha, which has actually met with some success. But I realized that I needed more power than what lisp-style macros could give me, because I needed whole-program rewriting, rather than just AST rewriting.
So that led to my next attempt, which never even got to the point of being usable. I called it Plastic, and the remains of that project can also be seen at my website. There's less to look at there than with Ripple, though.
I basically abandoned Plastic once I saw how dead the OpenJava mailing list was, and realized that any user that would actually be willing to use such a thing would already be using Scheme or Haskell anyway. Java people just aren't willing to write anything but straight Java. Preprocessors are too nonstandard for them.
Dan Shappir - Re: Hundred Year Language
4/14/2003; 8:34:33 AM (reads: 2571, responses: 0)
From Sjoerd's blog:
For example, you can sum live data by summing your input every time it changes. But this pseudo code would be much faster:
initial: sum = 0
on add(v): sum += v
on remove(v): sum -= v
I'm not even sure if I'd prefer to program this way, or if I'd just program summation for a collection, and let the compiler figure out the above code.
Actually, Sjoerd, you should be aware that BeyondJS lets you do something akin to that. Specifically, BeyondJS implements streams, which are sequences of events with a collection-like interface. For example, to perform the summing operation:
var st = new Stream();
var sum = 0;
st.foreach(function(v) { sum += v; });
alert(sum); // displays 0
st.push(2);
alert(sum); // displays 2
st.push(3);
alert(sum); // displays 5
Note how the sequence of operations is sort of the reverse of what you would do with an array:
var st = new Array();
st.push(2);
st.push(3);
var sum = 0;
st.foreach(function(v) { sum += v; });
alert(sum); // displays 5
Patrick Logan - Re: Hundred Year Language
4/14/2003; 9:36:20 AM (reads: 2474, responses: 4)
st.foreach(function(v) { sum += v; });
The choice of the name foreach is misleading. Otherwise I could read the code easily.
The function is really an event handler, i.e. it is evaluated "for each" element pushed in the future.
So why not call it something like...
st.onPush(function(v) { sum += v; });
Maybe it is just my Scheme bias, but foreach already has a different meaning in several popular languages.
Ehud Lamm - Re: Hundred Year Language
4/14/2003; 9:47:09 AM (reads: 2533, responses: 3)
Yes. I too was confused for a second because of foreach. onPush is nicer. If the name should be applicable to different containers, call it onInsert instead.
Dan Shappir - Re: Hundred Year Language
4/14/2003; 11:30:23 AM (reads: 2593, responses: 2)
I agree that foreach is misleading in this context. This is the result of giving an event stream (a sequence in time) the same interface as a collection, such as an array or a lazy list (a sequence in space).
Note that JavaScript isn't an interface-based language (well, JScript.NET is, but I digress). Instead each object is just a map, and polymorphism is accomplished by method/property name. So calling this method foreach both for streams and arrays, as shown in the code sample, lets me use them polymorphically. For example:
var a = new Array();
x.feed(a);
will work whether x is an array, a lazy list, or an event stream. This is because feed is implemented using foreach. For an array or list this code will simply copy the current content into the array. For the stream it will push the content of each new event into the array as it comes in.
I've actually implemented most of the sequence manipulation functions, such as coalesce, fold, and collect, on top of each other, with foreach as the base. As a result, if X is a type that implements foreach, by deriving from the array prototype it can inherit all the other functions. I actually do something like that for the Stream type.
Calling this method onInsert or onPush isn't right either, because for arrays and lists it iterates over the members that are already in, not new ones being added. This is why I said the stream and array code snippets were reversed (relative to each other).
Patrick Logan - Re: Hundred Year Language
4/14/2003; 12:49:17 PM (reads: 2432, responses: 0)
Calling this method onInsert or onPush isn't right either, because for arrays and lists it iterates over the members that are already in, not new ones being added. This is why I said the stream and array code snippets were reversed (relative to each other).
I'm not sure it makes sense to call these "lazy" in the traditional computing definition. The language still appears to be strict, right?
In this case, the use of foreach for the different kinds of collections does not seem appropriate.
I can see the advantage of having event handlers on collections and streams, but I cannot see the advantage of overloading the names of the handlers with the act of iteration over elements that already are in a collection.
Matt Hellige - Re: Hundred Year Language
4/14/2003; 1:45:27 PM (reads: 2640, responses: 1)
So calling this method foreach both for streams and array, as is shown in the code sample, lets me use them polymorphically.
This is an interesting idea, but I'm not sure I think it's a good one. Doesn't it seem like there's a significant semantic difference between a computation that I can be sure is already finished and one that may finish at some arbitrary time in the future (or may never finish)?
Should I really be able to write:
var a = new Array();
x.feed(a);
and not know whether or not the operation is done? Maybe I'm missing something, or maybe this is idiomatic of JavaScript in a way that, in context, would seem more reasonable...
Dan Shappir - Re: Hundred Year Language
4/14/2003; 2:03:32 PM (reads: 2497, responses: 1)
I really haven't explained myself very well, which is an obvious indication that the terminology I've chosen is indeed lacking. Let me try again from the top (and excuse me if I'm stating the obvious):
The foreach method was first introduced into BeyondJS as a functional way to iterate over the elements of a JavaScript array. You pass a function reference to this method and this function is invoked for each element in the array, for example:
[1,2,3,4].foreach(alert);
will display the numbers 1 through 4. Note that foreach is not a means of activating a callback when new items are added to the array; that is actually a nice idea but one that BeyondJS currently doesn't implement. If and when it does, onInsert could definitely be an appropriate name.
foreach was implemented directly over JavaScript's intrinsic iteration facilities, i.e. the for statement. When Sjoerd and I implemented additional iteration methods, such as fold and filter, we built them on top of foreach. Thus when I implemented lazy lists it occurred to me that if I just implement foreach I get the rest more or less for free (actually for lazy lists it was unfortunately less rather than more).
I'm not sure it makes sense to call these "lazy" in the traditional computing definition. The language still appears to be strict, right?
BeyondJS does implement real lazy lists, and they are distinct from streams. With lazy lists the elements of the collection are evaluated on demand, at least the first time. And we've made the effort to implement all expected features, such as recursive list definition. We've also made the effort to make lazy lists as polymorphic as possible with the built-in arrays. Thus lazy lists also do not implement an onInsert method. Here is another sample usage of lazy lists - calculating factorial:
var facs = (0).lazy(1);
facs.extend(facs.tail().zipWith("*", (2).lazy()));
alert(facs.itemAt(4));
(0).lazy(1) returns the lazy list equivalent of the array [0,1], while (2).lazy() returns the unbounded lazy sequence 2,3,4,...
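(For comparison, the same trick written as a Haskell lazy list -- the style these BeyondJS lists imitate -- would be roughly:)
facs :: [Integer]
facs = 0 : 1 : zipWith (*) (tail facs) [2..]
-- facs !! 4  ==  24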
When I added streams to BeyondJS, my original intention was to make them compatible with arrays and lazy lists, to allow them to work together. Making them behave like collections seemed the best way to accomplish this. Also I liked the concept of managing a sequence in time using the same syntax as managing a sequence in space. Sjoerd's recent blog entry as well as the responses in this thread indicate that it was indeed a good idea.
The downside, which was immediately apparent to both Sjoerd and myself, was that foreach was being used to install an event callback instead of iterating over existing items. Still, if you think of a stream as a sequence in time, it does IMO make sense. The biggest discrepancy for me is that foreach on arrays or lazy lists iterates over the elements already in the collection, while for a stream it "iterates" over elements added after it was invoked.
So, I could add an onInsert method to streams which would reference the same implementation as foreach. I could also try to add an onInsert method to arrays and lists that is different from their foreach method and is invoked every time a new member is added. I still think that using foreach as I have for streams makes sense and provides benefits.
Ehud Lamm - Re: Hundred Year Language
4/14/2003; 2:24:05 PM (reads: 2490, responses: 0)
I've been thinking about your argument re polymorphism (in your previous reply) and I think I am convinced. The foreach name is a bit awkward, but it does serve a purpose.
One point to keep in mind here is that BeyondJS is a cross-paradigm library (to coin a phrase). It adds functional programming facilities to an imperative language. So it's natural to have this sort of mismatch.
Perhaps the terminology can be fine-tuned a bit, though. For example, if you don't view the lazy list as a time-varying sequence of objects, but rather as an infinite data structure, foreach makes more sense. The fact that operations with side effects produce the side effects at different times (compared to arrays) is one of those inevitable paradigm mismatches. The same goes for the analogous situation in which elements are added to an array after foreach was applied. Obviously, this sort of array mutation is not very functional (i.e., it's impure).
Notice that the problem we are having is due to the use of English terms ("for each", "on insert") that convey subtle semantics. A better notation would solve the problem (think comprehensions for example).
andrew cooke - Re: Hundred Year Language
4/15/2003; 5:37:39 AM (reads: 2653, responses: 0)
You could get the best of both worlds by using a different method for the two cases and defining an adapter object that converts one to the other if required (although I don't know how well that is supported in JavaScript). Of course, there are costs - depends on the language and how much you leave to dynamic code.
Dan Shappir - Re: Hundred Year Language
4/15/2003; 6:56:54 AM (reads: 2404, responses: 0)
or maybe this is idiomatic of JavaScript in a way that, in context, would seem more reasonable...
Nothing done in BeyondJS is idiomatic of JavaScript ;-)
Should I really be able to write ... and not know whether or not the operation is done?
Whether it's a good thing or not is debatable, but isn't that the whole point of polymorphism - to be able to activate code that does things the calling code doesn't know about? Seems to me that, given your sample, whatever code called this code section passing a Stream in x is the one that should be aware of this behavior. I do agree that it does require you to adjust your thinking and can become misleading at times. But isn't that true for any good/interesting PL feature :-)
You could get the best of both worlds by using a different method for the two cases and defining an adapter object that converts one to the other if required
To an extent this is what I meant by:
I could add an onInsert method to streams which would reference the same implementation as foreach.
In JavaScript objects are dynamic; you can add and remove methods and properties at runtime. So an object can act as its own adapter. However, given that I want Stream objects to always integrate with collections such as arrays and lists, and given that this integration is partially dependent on methods defined using foreach, I would end up always using this adapter. Thus the adapter loses its meaning.
BTW, I'm grateful to all of you for your feedback on BeyondJS. Now if only I could get you to download and play with it ...
Frank Atanassow - Re: Hundred Year Language
4/16/2003; 8:03:03 AM (reads: 2306, responses: 2)
Notice that the problem we are having is due to the use of English terms ("for each", "on insert") that convey subtle semantics. A better notation would solve the problem (think comprehensions for example).
With all due respect, that would not solve the problem.
What would solve the problem is if Dan gave a contract for the method in question. Otherwise, it isn't clear what the behavior of foreach (or whatever) should be for an arbitrary (including yet-to-be-added) subclass.
Also, after giving the contract, you will have a better idea what to name the method.
I often think that most of the oft-cited extensibility of OO comes from the fact that the method contracts, which are left implicit, are actually being incompatibly revised with each extension.
Matt Hellige - Re: Hundred Year Language
4/16/2003; 8:41:43 AM (reads: 2381, responses: 1)
I often think that most of the oft-cited extensibility of OO comes from the fact that the method contracts, which are left implicit, are actually being amended with each extension.
This is a pretty general statement, but I'd certainly agree that it's what's going on here... To my mind (and I believe I'm not alone) this is one of the major pitfalls in inheritance-based design (and designs based on name-overloading in general). You have to be very careful to make sure that functions with the same name (or objects with the same type/interface/whatever-your-language-calls-it) actually implement the "same" operations (or "isomorphic" or "analogous", or whatever).
The trouble is in agreeing on any definition of "same"... Personally, I would be distressed to discover that "completes this operation atomically" and "begins an operation to complete at an arbitrary future time (or never)" are considered the "same" behavior, but as I said, it's all about agreement.
As an example, consider output. Suppose you have a number of objects x, y, z that all implement a print() method that allegedly "behaves analogously", e.g., it prints a string (to the terminal, a buffer, a file, the network, whatever). Now suppose you find out that one of these objects actually doesn't print your string, it stores it internally and may or may not print it later (say, based on whether or not someone ever calls a flush() method). I think many programmers would be distressed by this state of affairs and consider this an unreasonable deviation from a sensible definition of "analogous behavior".
However, this is exactly what many languages provide, notably Java, where in fact it causes great pain when dealing with OutputStreams and Writers. In fact, I've seen bugs take far longer to find than they should've because a debug statement was swallowed by a program crash that occurred after it was purportedly printed (but before it was flushed).
I think name overloading is wonderful, and I don't really have a general solution here (or believe that one is possible), but it's definitely something designers need to be cognizant of at all times. In some sense, there's always already a tension here, in that the whole point of name overloading is to call demonstrably (and usefully) different things "the same".
Adewale Oshineye - Re: Hundred Year Language
4/17/2003; 3:04:08 AM (reads: 2386, responses: 0)
I've always felt that the point of name overloading wasn't to just call different things by the same name--that would be confusing and without benefit. The benefit of name overloading is to indicate that these things have a strong relationship. That relationship is usually some kind of familial similarity.
So if I had 3 objects (x, y and z) which have a print method then I'd expect them to behave similarly enough that they could all be considered to satisfy some contract. This contract would be defined, more or less loosely, in a superclass or interface. Ideally there would also be a unit test that enforces this contract. This way I could add another class that satisfies this contract in one way or another.
So the example makes sense if we infer a contract that's loose enough to permit sequences in time and space.
Frank Atanassow - Re: Hundred Year Language
4/17/2003; 7:26:48 AM (reads: 2217, responses: 2)
The trouble is in agreeing on any definition of "same"... Personally, I would be distressed to discover that "completes this operation atomically" and "begins an operation to complete at an arbitrary future time (or never)" are considered the "same" behavior, but as I said, it's all about agreement.
Yes, and that is why programming language semantics is so important, and why it irks me when people give a language implementation without an accompanying semantics. What is considered "same" behavior is defined by a language semantics: two programs are the same iff they are equal under the semantics. In particular, it is impossible to write (useful) portable applications in a language without a semantics.
As an example, consider output. Suppose you have a number of objects x, y, z that all implement a print() method that allegedly "behaves analogously", e.g., it prints a string (to the terminal, a buffer, a file, the network, whatever). Now suppose you find out that one of these objects actually doesn't print your string, it stores it internally and may or may not print it later (say, based on whether or not someone ever calls a flush() method). I think many programmers would be distressed by this state of affairs and consider this an unreasonable deviation from a sensible definition of "analogous behavior".
However, this is exactly what many languages provide, notably Java, where in fact it causes great pain when dealing with OutputStreams and Writers. In fact, I've seen bugs take far longer to find than they should've because a debug statement was swallowed by a program crash that occurred after it was purportedly printed (but before it was flushed).
Yes, if the contract for Java output mechanisms is so weak that you cannot be sure that something is actually even printed (and I am not claiming that is the case, but assuming you are right), then it is basically impossible to write any useful programs which follow the semantics, because they cannot rely on the output ever actually being observable. The result is that all useful programs will be non-portable or buggy, because they must depend on environmental conditions which are not captured by the semantics.
However, designing a good, useful semantics for I/O is actually very hard, precisely because I/O usually involves environmental conditions to do with the OS, and so on, so I would not be too hard on Java if its semantics here is very weak.
Matt Hellige - Re: Hundred Year Language
4/17/2003; 7:50:32 AM (reads: 2281, responses: 1)
Yes, and that is why programming language semantics is so important, and why it irks me when people give a language implementation without an accompanying semantics. What is considered "same" behavior is defined by a language semantics: two programs are the same iff they are equal under the semantics. In particular, it is impossible to write (useful) portable applications in a language without a semantics.
Well, I'm a pretty big fan of formal semantics, but at the risk of saying something controversial, I think pushing too far in this direction may be a red herring, and this particular case is a decent pivot, I think. It is my strong feeling that programming is, and will always be, at some level, a social undertaking. Standard problematics of language, interpretation and convention are operative here, which among other things, means that overly strident attempts to formalize definitions, specifications, contracts and conventions may be misguided. The value of this example in explicating this point is precisely that a formal semantics would probably not help here. The entire idea of name overloading is to take operations that are not, in the formal sense, precisely the same and draw an analogy between them. At a very fundamental level, this involves a social convention, an understanding (or guess, or theory, or whatever) of who the other readers of the program will be (including yourself at later times), the greater context of the code fragment in question (language idiom, project idiom, stylistic choices...), and so on.
I'm not exactly sure where I'm going with this, but I suppose my point is that there may be limits to the usefulness of formalization of meaning in programming, as there are in language and culture in general. A better-read (and more thoughtful) individual than myself could probably draw fruitful parallels to more general theories of signification, hermeneutics and interpretation.
In any case, I'd like to state unequivocally that I'm a strong believer in semantics for programming languages, and I'm a strong believer in elegant language design. I'm just convinced that it's not a silver bullet, and I believe that sooner or later these issues of convention and interpretation will show up, because I think they're basically built into the kinds of abstraction that programmers love so much.
Frank Atanassow - Re: Hundred Year Language
4/18/2003; 2:41:36 AM (reads: 2285, responses: 0)
Well, I'm a pretty big fan of formal semantics, but at the risk of saying something controversial, I think pushing too far in this direction may be a red herring,
Controversial? Just the opposite I would say, as hardly anyone actually wants to apply formal semantics to real problems.
It is my strong feeling that programming is, and will always be, at some level, a social undertaking.
First, are you making a prediction, or simply expressing a desire? Do you want to see formal semantics put to good use, or are you happier with it just being a game for pointy-headed people to occupy their time with?
Second, I think that if you ask mathematicians about their work, many of them will say it has a strong social element. Formal methods and social interaction are not an either-or proposition: that's why, for example, people organize conferences on formal methods, coauthor papers with others on formal methods, and generally work together in groups on formal methods just as they do on software development projects or even, say, -cough- quilt-making.
Third, please pretend you have journeyed back in time to 1950. Eager to impress people with the development of future technology only you are aware of, you tell an acquaintance: "In half a century, things called `computers' will make our lives much easier."
"Golly," your friend says---because they said things like that back then---"you mean you just tell them what to do and they do it for you, like everyone has their own robot servant or something? That sounds wonderful! I would have him write all my school papers for me, and do my laundry and take out the trash, and answer all the questions about the universe which I could never figure out myself."
You reply, "Well, uh, actually you would have to employ a specially trained person called a `programmers' to tell the computer how to do specialized stuff like that (and it can't answer all your questions for you anyhow)."
"Gee whiz, `programmer'? That sounds awfully cold and intimidating. What do they do, exactly, these `programmer'-types, and how do they get the computer to do stuff?"
"Well, they sit all day in front of a keyboard, typing in text written in a special, technical, formal language, which eventually translates into numbers, which the computer processes by arithmetic." Your friend starts looking queasy, so you hurriedly add: "Actually, it's not as bad as you think, though: almost everyone in 2003 spends hours in front of a computer, reading mail messages, communicating via instant messengers, surfing the Internet, chatting electronically, and so on."
"That sounds horrible!" he says. "Why, it's the most inhuman thing I've ever heard. If I wanna talk to a friend, why, I'll go over an' pay him a call personal-like, so he knows I really wanna see him. Or, if he's too far away, I'll write him a letter, in my own handwritin', so he can see I spent some time on him. And the prospect of sittin' in front of a machine every day for hours on end, why, it just sounds like the most lonely, isolatin' thin' I ever did hear. An' these `programmers' sound like automatons, slave laborers in the most horribly repetitive and mind-numbin' factory I can imagine! Pushin' numbers all day, prolly speakin' in numbers too. Are they even human?"
"No, no! It's not like that," you insist, "you can still see friends in person and still write real letters. But that takes so long, and if you can do things more quickly via your computer, it just lets you communicate more (and more often), so the upshot is mostly positive. And, really, programmers are actually quite social. Although they create things which are formal and technical in nature, the way they interact to get those results is organic and creative, though admittedly they have to follow some rules which are unavoidable and imposed by logic, or the computer, or the programming language itself."
"Bah, I don' believe you!" says your friend. "`Computers', `programs', `programmers'... It all sounds like in your future them Russkies already done come over an' turned our wonderful nation into some kinda authoritarian nightmare. When'd you say you came from? 1984? Cough it up, you're a commie, ain't ya, boy? Tryin' ta pervert our beautiful way of life with your ungodly ways. Lemme tell you something, and lemme tell you so it's crystal clear: we ain't never gonna let something like that happen in this here country, by God."
OK, I got a bit carried away there at the end... :)
The entire idea of name overloading is to take operations that are not, in the formal sense, precisely the same and draw an analogy between them. At a very fundamental level, this involves a social convention, an understanding (or guess, or theory, or whatever) of who the other readers of the program will be (including yourself at later times), the greater context of the code fragment in question (language idiom, project idiom, stylistic choices...), and so on.
I disagree. Semantics often give more than one equivalence on things: in one sense they are different, but in another, coarser sense, they may be the same.
So I would restate your claim as: The entire idea of name overloading is to take distinct operations that are in some formal sense the same and draw an analogy between them by giving them the same name.
For example, we use the same symbol for addition of integers and reals because the addition operation obeys some of the same properties in both cases. Are these two additions the same operation? Certainly not: they don't even operate on the same sets. But they are equivalent in the sense that they're both commutative, associative, etc.
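To make that concrete, here is a minimal sketch in Haskell (the class name Addable and the operator .+. are just invented for illustration): the same symbol names two genuinely different operations, and the class records the shared laws that are the formal basis for reusing the name.

-- The class records the formal basis for the identification:
-- (.+.) is expected to be associative and commutative, with 'zero'
-- as its identity, whatever the carrier type.
class Addable a where
  (.+.) :: a -> a -> a
  zero  :: a

-- Two genuinely different operations, on different sets...
instance Addable Int where
  (.+.) = (+)
  zero  = 0

instance Addable Double where
  (.+.) = (+)
  zero  = 0.0

-- ...and code that relies only on the shared laws, not on which
-- addition is meant.
total :: Addable a => [a] -> a
total = foldr (.+.) zero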
It would be pointless to identify operations (in the same scope) if there were no formal basis for the identification*, because the whole point of using symbols for things which are themselves not symbols (the denotations of those symbols) is to do formal manipulation. *(There is one reason for doing this: when you don't want to manufacture a new name.)
The social aspect here is the choice of name or symbol itself, for example whether you write addition as + or * or & or what-have-you, and, only in cases when it is sensible to do so, the choice of when to identify things rather than treat them distinctly.
In any case, I'd like to state unequivocally that I'm a strong believer in semantics for programming languages, and I'm a strong believer in elegant language design. I'm just convinced that it's not a silver bullet,
It's not a silver bullet, but why does it have to be? Formal semantics won't do your cooking and cleaning or keep you company late at night. So what? But in this case, it is not only applicable, but also far more successful at giving answers to questions than---well, what is your alternative, anyway? Group discussion? Long, hard thinking? Prayer and finger-crossing? Those are approaches, not methods: isn't software development supposed to be an engineering discipline?
I believe that sooner or later these issues of convention and interpretation will show up, because I think they're basically built into the kinds of abstraction that programmers love so much.
Convention is one thing, but problems of interpretation are exactly what formal methods and semantics are designed to solve, and have been successful at solving in all other scientific and engineering fields.
|
|
Frank Atanassow - Re: Hundred Year Language
4/18/2003; 4:05:16 AM (reads: 2176, responses: 0)
|
|
BTW, let me just say that I find this resistance among programmers to formal methods really remarkable and self-contradictory.
First, one thing I understood very early in my programming career is that computers by definition are only capable of doing very banal and repetitive work and that, therefore, if I am doing something which my computer could be doing for me, then I am not only wasting my time, but I am also ipso facto doing something very banal and repetitive. Therefore, as soon as I see how to turn some task over to a computer, I will do it: I want to devote the largest part of my life to doing insightful and creative things, i.e., things which computers are not capable of doing, because this is what makes me a human being, and distinguishes me from a simple machine. It seems to me that every programmer should have grasped this point at some time in their career.
Another thing they should have grasped is that, if you have to do something many times, then it is usually better to factor out the commonality of each instance/occurrence and turn it into an abstraction that you can apply the same way many times.
Now, if formal methods give me a way of abstracting away from a computing problem which keeps recurring (and I don't see how you can abstract away from something in a constructive manner without formalizing it), then I will use it. That way, I can devote my time to better pursuits. If that formal method is itself computable (as with type inference), all the better: I will turn the entire task over to the computer and let it analyze itself.
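Type inference is the clearest case of this for me. A tiny Haskell sketch (the function name is invented): I write no annotation at all, the compiler infers the most general type, and every call site is checked against it without my lifting a finger.

-- No annotation written, yet the compiler infers
--   pairwise :: [a] -> [(a, a)]
-- and checks every use against it.
pairwise xs = zip xs (drop 1 xs)

main :: IO ()
main = print (pairwise [1, 2, 3, 4 :: Int])   -- [(1,2),(2,3),(3,4)]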
What I described above seems to me to be an inescapable conclusion from my two earlier premises, which, as I said, ought to be readily apparent and acceptable to almost all programmers. And yet few programmers are willing to apply or even glance at formal methods. Why? I find it really bizarre. It must be an accidental or historical thing and, if so, it should be possible to overcome it.
Second, people think of formal methods as a mathematical activity, more mathematical than programming at least. In a sense, it's certainly true, because it often has a non-constructive flavor.
But in another sense, it's just exactly wrong: indeed, all programmers have a much deeper commitment to formal methods than mathematicians.
Try this experiment: take your next 20,000-line program to your friendly neighborhood mathematician so he can admire your handiwork. Be sure to explain in detail the grammar of the programming language, which values get stored in which variables in the store during execution, why you named each source file and variable the way you did for clarity, which loops you tightened for performance reasons and how cache locality influences your clever algorithm so delicately. And don't forget to tell him about all the software you used to produce it, why you couldn't use gcc 3.03 to compile it because of a bug in the optimizer, the URL where he can download the right version, and all those wonderful details about how to compile the compiler, set up the right directories, run configure and track down the stupid compilation bug in such and such a case.
After all, these are all formal details. They have nothing to do with what the program does, what result it produces, only how it produces it. Mathematicians love that stuff, right, because they just love to write down pretty little symbols and start pushing them around...
Well, I bet your friendly neighborhood mathematician will be asleep in just over two seconds.
My point---in case you missed it---is that programmers jump through hoops for formality. Nonono: they jump through hoops within hoops within hoops for it. They love it, they live for it; those who don't eventually become mathematicians or managers (yuck).
But don't ever tell a programmer that. Call a spade a spade and all of sudden it's "Get serious, you'd need a PhD to do that!" or "You're trying to take all the fun out of programming!" or "I can't be creative with all that bondage and discipline!"
Bondage and discipline?! Here's bondage and discipline:
Mnemonic    Title                                                         RFC#  STD#
--------    Internet Official Protocol Standards                         3300    1*
--------    [Reserved for Assigned Numbers. See RFC 1700 and RFC 3232.]            2
--------    Requirements for Internet Hosts - Communication Layers       1122    3
--------    Requirements for Internet Hosts - Application and Support    1123    3
--------    [Reserved for Router Requirements. See RFC 1812.]                     4
IP          Internet Protocol                                              791    5
ICMP        Internet Control Message Protocol                              792    5
--------    Broadcasting Internet Datagrams                                919    5
--------    Broadcasting Internet datagrams in the presence of subnets     922    5
--------    Internet Standard Subnetting Procedure                         950    5
IGMP        Host extensions for IP multicasting                           1112    5
UDP         User Datagram Protocol                                         768    6
TCP         Transmission Control Protocol                                  793    7
TELNET      Telnet Protocol Specification                                  854    8
TELNET      Telnet Option Specifications                                   855    8
FTP         File Transfer Protocol                                         959    9
SMTP        Simple Mail Transfer Protocol                                  821   10
SMTP-SIZE   SMTP Service Extension for Message Size Declaration           1870   10
MAIL        Standard for the format of ARPA Internet text messages         822   11
--------    [Reserved for Network Time Protocol (NTP). See RFC 1305.]            12
DOMAIN      Domain names - concepts and facilities                        1034   13
DOMAIN      Domain names - implementation and specification               1035   13
--------    [Was Mail Routing and the Domain System. Now Historic.]              14
--------    Simple Network Management Protocol                                   15
SMI         Structure and identification of management information
            for TCP/IP-based internets                                    1155   16
Concise-MI  Concise MIB definitions                                       1212   16
MIB-II      Management Information Base for Network Management of
            TCP/IP-based internets: MIB-II                                1213   17
--------    [Was Exterior Gateway Protocol (RFC 904). Now Historic.]             18
NETBIOS     Protocol standard for a NetBIOS service on a TCP/UDP
            transport: Concepts and methods                               1001   19
NETBIOS     Protocol standard for a NetBIOS service on a TCP/UDP
            transport: Detailed specifications                            1002   19
ECHO        Echo Protocol                                                  862   20
DISCARD     Discard Protocol                                               863   21
CHARGEN     Character Generator Protocol                                   864   22
QUOTE       Quote of the Day Protocol                                      865   23
USERS       Active users                                                   866   24
DAYTIME     Daytime Protocol                                               867   25
TIME        Time Protocol                                                  868   26
...
OK, I feel a lot better now.
|
|
Adewale Oshineye - Re: Hundred Year Language
4/18/2003; 6:42:43 AM (reads: 2152, responses: 0)
|
|
Something's been bothering me about this article since I first read it. I think I'm finally able to articulate it now. The article is based on faulty assumptions.
Firstly, it assumes that there is a main evolutionary branch and that we can see that branch from here. I'd argue that we can only see that the Neanderthals were not on the main branch with the benefit of hindsight.
Furthermore, I'd say that sticking to the main branch is only a viable strategy if you expect the environment, and the evolutionary pressures it imposes, to stay roughly the same. We know this isn't so. The general trend is that teams of below-average programmers (see page 3) are being required to build bigger, more complicated software dealing with ever larger datasets in ever smaller amounts of time. Thus evolution will be selecting for languages that help us better manage and re-use code through libraries, modules and explicit support for versioning. Graham dismisses this as "a sustainable way to write spaghetti code" with very little to offer "good programmers". Unfortunately, the one thing we can be sure of is that, through end-user programming if nothing else, "good programmers" will become an ever smaller minority of those writing code. Add to that the likelihood that large corporations, with their infamous lack of taste in selecting programming languages, will be funding most development, and we can predict that the hundred year language will have more in common with Cobol and Perl than with Scheme.
The main evolutionary branch for mainstream programming seems to me to go through C#, Java, VBA, Perl and XML. That's what most people are using/learning. That's where most of the money funding new languages is going. And that's where the libraries are. It's disappointing to accept but evolution isn't pushing us towards more elegant solutions for "good programmers" but towards more convenient solutions for the ordinary person.
The challenge is how to bridge the gap between the kind of languages that the mainstream is evolving towards and the kind we'd like to see.
|
|
andrew cooke - Re: Hundred Year Language
4/18/2003; 7:27:39 AM (reads: 2160, responses: 0)
|
|
Maybe there's a reason that (some) programmers don't like specifications - they don't seem to be easy to learn. As I write this I'm "experimenting" on a Generic Java compiler, trying to work out just what it's supposed to do. I write a program in a certain way, see whether it compiles and, if it does, whether it runs. Then I try another way. It's pretty obvious that I'm constructing my own (somewhat haphazard and informal) specs. So why don't I simply sit down and read the specs themselves?
[One reason is that I can't find them. All I have is a paper (Bracha et al '98 - "GJ Extending the Java...") that gives some examples and a general discussion - it doesn't even have a formal description of the syntax, never mind the semantics. Does anyone know of anything better?]
Another, more interesting reason is that reading specs doesn't seem to "drive things home". You (or, at least, I) can't learn physics or maths by reading axioms. The tedious exercises and tutorials seem to be necessary. My best guess is that the subconscious plays a big role in guiding conscious thought (it's constantly solving the "frame problem", for example) and the only way to get the subconscious to do stuff is by doing the repetitive chores until it "sinks in".
Of course, that doesn't mean I need to use the compiler. We might have a lot less dodgy software if compiler writers made two separate products: a compiler that either generates object code or silently crashes and an interactive learning system that guides you through repetitive exercises on code fragments...
|
|
Ehud Lamm - Re: Hundred Year Language
4/26/2003; 12:20:45 PM (reads: 2073, responses: 0)
|
|
Werner Vogels has some interesting comments concerning Paul Graham's discussion of concurrency.
|
|
Noel Welsh - Re: Hundred Year Language
4/30/2003; 1:35:44 AM (reads: 1907, responses: 1)
|
|
I disagree with his assertion that parallelism is hard. Parallelism in a language like Erlang is falling-off-a-log simple. If you get rid of shared memory you get rid of most of the race conditions that cause bugs. Just 'cause Gosling read the wrong Hoare papers (monitors instead of CSP) doesn't mean the rest of us have to follow.
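A rough sketch of the style I mean, rendered in Haskell with Control.Concurrent rather than Erlang (since the point is the absence of shared mutable state, not the particular language): the workers and the main thread interact only by messages on a channel, so the usual data races on shared variables simply can't arise.

import Control.Concurrent      (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

main :: IO ()
main = do
  inbox <- newChan
  -- Ten workers, no shared mutable state: each just sends its result.
  mapM_ (\n -> forkIO (writeChan inbox (n * n))) [1 .. 10 :: Int]
  -- The only coordination is receiving ten messages back.
  results <- mapM (const (readChan inbox)) [1 .. 10 :: Int]
  print (sum results)   -- 385, whatever order the messages arrive in

Compile with -threaded and run with +RTS -N and the workers can actually run in parallel; the program itself doesn't change.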
|
|
andrew cooke - Re: Hundred Year Language
5/9/2003; 2:47:19 PM (reads: 1839, responses: 0)
|
|
I disagree with his assertion that parallelism is hard.
To what extent are threads and continuations orthogonal? It seems to me that both are becoming popular to solve similar problems, but the two languages I know of with strong thread emphasis (Erlang, Oz/Mozart) don't (afaik) have continuations.
The most obvious example I can think of for orthogonality is exceptions - easy to do with continuations, but not threads. But then Erlang's emphasis on self-healing systems suggests that exceptions aren't that important anyway.
But I'm (acutely) aware that I don't know much about this (or the languages involved). Are (many, lightweight) threads and continuations considered sufficiently different to be both useful? Or, in the future, will we see more support for threads, and continuations dying off as a compromise solution (you might need to assume a future with dataflow variables and/or functional programming :o)?
This has kind-of veered off from the quote above as I've edited it...
|
|
Noel Welsh - Re: Hundred Year Language
5/13/2003; 3:26:33 AM (reads: 1731, responses: 0)
|
|
With threads you can implement continuations. Each captured continuation is a thread. Calling a continuation becomes running that thread. Continuations can be used to implement co-operative threading directly, or preemptive threading if an asynchronous timer is available.
I find both threads and continuations useful. The way I look at it is: each continuation is a program state. Each thread is a VM that runs continuations. Some systems allow continuations to migrate between threads. Witness the recent release (204) of PLT Scheme, which includes threading features based on Concurrent ML (similar to Erlang, I believe) in addition to continuations.
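Here is a minimal sketch of the first direction, co-operative threading built directly out of captured continuations, written in Haskell's ContT for concreteness (schedule, yield and spawn are names I've made up, not any library's API):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont  (ContT, callCC, evalContT)
import Data.IORef                (modifyIORef, newIORef, readIORef, writeIORef)

main :: IO ()
main = evalContT $ callCC $ \exit -> do
  -- The ready queue holds suspended continuations: one per "thread".
  queue <- lift (newIORef ([] :: [ContT () IO ()]))

  let enqueue t = lift (modifyIORef queue (++ [t]))

      -- Run the next suspended thread, or leave the scheduler entirely.
      schedule = do
        ts <- lift (readIORef queue)
        case ts of
          []         -> exit ()
          (t : rest) -> lift (writeIORef queue rest) >> t

      -- Capture "the rest of this thread", park it, and run someone else.
      yield = callCC $ \k -> enqueue (k ()) >> schedule

      -- A new thread; when its body finishes, control returns to the scheduler.
      spawn body = enqueue (body >> schedule)

      worker name =
        mapM_ (\i -> lift (putStrLn (name ++ " " ++ show (i :: Int))) >> yield)
              [1, 2, 3]

  spawn (worker "A")
  spawn (worker "B")
  schedule            -- prints A 1, B 1, A 2, B 2, A 3, B 3

The whole "thread system" is just a queue of captured continuations plus a scheduler that keeps entering them.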
|
|
andrew cooke - Re: Hundred Year Language
5/13/2003; 5:07:57 PM (reads: 1726, responses: 0)
|
|
I agree (that you can implement one with the other), but since you can implement most functionality with either, is it necessary to implement both? As far as I understand things, continuations have a cost: either you copy the stack, or you allocate activation frames on the heap, or you use spaghetti/cactus stacks. Threads have a similar cost - separate stacks if there are no continuations involved - and they "solve the same problem" (storing program state), but supporting both threads and continuations gives you the cost of both (so the question I'm asking is: is that correct?). (I'm assuming that stacks are more efficient than anything else because they're simpler and reduce garbage collection.)
Hmmm. I guess if you allocate frames on the heap then maybe you get threads for little extra cost? In that case continuations cost more than threads, but once you have continuations threads are almost free.
My original reasoning was that you need threads because you want to be able to move programs between one processor, multiple local processors, and processors connected across a network. So why bother with the extra cost of continuations (it seems odd, for example, to implement exception handling as the spawning of sub-threads, but I guess it would work and could be made transparent to the user). But now I'm seeing that maybe you can have both for little extra cost.
I can see that sending continuations between threads could be very powerful - mobile agents!
|
|
Noel Welsh - Re: Hundred Year Language
5/14/2003; 9:28:50 AM (reads: 1706, responses: 0)
|
|
I want both threads and continuations, though I'm not sure my reasons are that sound. A few of the differences I see are:
- Threads tend to be heavier than continuations (this needn't be true, but it is a practical limitation of current implementations); still, I want to take advantage of SMP.
- Continuations on the same call chain share memory, but I don't necessarily want threads to do this. I'm content to achieve all inter-thread communication via message passing.
- I'm not sure I want my uses of continuations to interact with the threading system. I'm not sure what the implications of this are, but I can imagine it might cause unusual behaviour (my catch-all argument ;-)
|
|
|
|