Google video recording of Alan Kay's OOPSLA 1997 keynote. Very good!
I've just watched it, and I've already had to reconsider a few long-held beliefs. Maybe Haskell won't solve global warming after all?
Ten years old, and all of his complaints are still true.
I thought that it was amusing that he was waving the AMOP around saying something to the effect of, [paraphrasing heavily] "These people get it! A proper OOPL has to be reflexive and extensible. The C++ and Java crowds don't get these basic points. And it's too bad that Lisp is all ivory tower and not accessible." Slamming the C++, Java and Lisp communities in one swell foop.
A unique sales approach to say the least.
"All you had to do to read a tape was to read the front part of a record [containing the proceedures that interpreted data records on the rest of the tape] into core storage and start jumping indirect through the pointers and the proceedures were there..." I really would like you to contrast that with what you have to do with HTML on the Internet...HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats....all you would need would be something like X Windows...."
"All you had to do to read a tape was to read the front part of a record [containing the proceedures that interpreted data records on the rest of the tape] into core storage and start jumping indirect through the pointers and the proceedures were there..."
I really would like you to contrast that with what you have to do with HTML on the Internet...HTML on the Internet has gone back to the dark ages because it presupposes that there should be a browser that should understand its formats....all you would need would be something like X Windows...."
I seem to be speechless...
I wonder what happened when the tape was read by a machine with a different instruction set... But anyway, now we know where the idea for Smalltalk's message dispatch mechanism came from! ;)
...plus a hardware-independent VM, which brings us right back to presupposing that there should be a program which understands standard formats, except now the format is that of a general purpose language, instead of a document markup language. Sorry if I'm stating the obvious.
The difference is that a general purpose language (or VM) can be much smaller and has a better chance of providing true interoperability than does the never-ending treadmill of document formats, i.e.:
HTML(1/2/3/4)/XHTML(1/2)/CSS(1/2)/(Java/J/Ecma)Script(1.(0/1/2/3/4))/DOM(Level 1/2/3)/XML/XSL/XSLT/MathML/SVG/XForms/...
Once you have a language that can draw pixels and capture mouse and keyboard input, you are essentially the interface/document equivalent of Turing-complete. No such completeness can exist with the current web's document-based approach. Give me Objects that know how to display themselves rather than documents which rely on paper specifications being implemented correctly and consistently.
The Java and Squeak VMs, for example, offer much better interoperability than does the W3C document stack (which doesn't even provide interoperability on the same machine when using different browsers).
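To make the "interface-complete" claim concrete, here is a toy sketch in Python (standing in for whatever small language/VM you like; the screen, the event format, and the react handler are all invented for illustration, not any real windowing API):

    # Two primitives: "set a pixel" and "read an input event".
    # Everything else (text, widgets, whole "browsers") is just code on top.
    WIDTH, HEIGHT = 16, 4
    screen = [[" "] * WIDTH for _ in range(HEIGHT)]  # the only output device

    def set_pixel(x, y, ch="#"):
        screen[y][x] = ch

    def react(event):
        # A "document" here is nothing but code that turns events into pixels.
        kind, x, y = event
        if kind == "mouse":
            set_pixel(x, y)  # e.g. a one-line paint program

    # A scripted stand-in for real mouse input:
    for event in [("mouse", 1, 1), ("mouse", 2, 2), ("mouse", 3, 3)]:
        react(event)

    print("\n".join("".join(row) for row in screen))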
Yes, like Java [1]:
* JDK 1.1.4 (Sparkler) September 12, 1997
  o JDK 1.1.5 (Pumpkin) December 3, 1997
  o JDK 1.1.6 (Abigail) April 24, 1998
  o JDK 1.1.7 (Brutus) September 28, 1998
  o JDK 1.1.8 (Chelsea) April 8, 1999
* J2SE 1.2 (Playground) December 4, 1998
  o J2SE 1.2.1 (none) March 30, 1999
  o J2SE 1.2.2 (Cricket) July 8, 1999
* J2SE 1.3 (Kestrel) May 8, 2000
  o J2SE 1.3.1 (Ladybird) May 17, 2001
* J2SE 1.4.0 (Merlin) February 13, 2002
  o J2SE 1.4.1 (Hopper) September 16, 2002
  o J2SE 1.4.2 (Mantis) June 26, 2003
* J2SE 5.0 (1.5.0) (Tiger) September 29, 2004
* Java SE 6 (1.6.0) (Mustang) December 11, 2006
* Java SE 7 (1.7.0) (Dolphin) anticipated for 2008
Also, backwards compatibility of any kind is a problem irrespective of documents or code. Self-displaying documents may implement workarounds for buggy VMs which then don't display correctly when the bugfix is released; objects which know how to display themselves tightly couple the data structure to the display format (what if I want to mine the data and display it differently?); and so on.
It's not as simple as it seems. Hopefully, RDF will help curb the proliferation of XML formats, but document formats are important nonetheless.
[1] http://en.wikipedia.org/wiki/Java_(programming_language)
Drawing pixels for output is far from interface-complete, as anyone with impaired vision will tell you. As soon as you recognise that more than one presentation of the same information is necessary, you need a data format.
It is better to define a presentation as a program rather than a bunch of hardcoded instructions. I think the grandparent post meant that a programming language is more flexible and can cover a wider spectrum of requirements than any specification designed by a committee.
My opinion is that instead of HTML, we should have had a LISP-like system where each page is a LISP program to be executed at the client.
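For what it's worth, here is a rough sketch of the "page as program" idea, with Python lists standing in for s-expressions (the primitives heading, para, and repeat are made up for illustration):

    def evaluate(form, out):
        """Evaluate one form of a 'page program', appending text to out."""
        if isinstance(form, str):  # atoms render as themselves
            out.append(form)
            return
        head, *args = form
        if head == "heading":
            out.append("\n== " + " ".join(args) + " ==\n")
        elif head == "para":
            for a in args:
                evaluate(a, out)
            out.append("\n")
        elif head == "repeat":  # a page can compute, not just mark up
            n, body = args
            for _ in range(n):
                evaluate(body, out)
        else:
            raise ValueError("unknown primitive: %s" % head)

    page = ["para",
            ["heading", "Hello"],
            ["repeat", 3, "tick "]]

    out = []
    evaluate(page, out)
    print("".join(out))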
It is better to define a presentation as a program rather than a bunch of hardcoded instructions.
Even a program has to be written in some language, and that language requires some "hardcoded" primitives to be able to express anything of interest.
I think the discussion confuses two things: expressive power and level of abstraction. Document description languages and the like are defined for a reason, namely to represent documents at a level that is high enough to abstract away the semantics from target-specific processing details. In other words, to separate content and structure from appearance and rendering.
That does not necessarily mean that something like HTML couldn't use more computational power (or better design in general), but still you want its primitives to be at an abstract level. And you will always be confronted with changes in requirements, which force you to introduce new primitives at some point.
Sure, but the program's going to generate data for a presentation layer to present. You can only fuse them if you know the presentation layer you intend to use; if not, you still have to have a document format, regardless of where it gets generated.
Postscript is an excellent example of this. I could still print the latest XHTML, SVG, MathML, etc. specs to a circa-1985 Postscript printer (if I had one). People think of Postscript as a document format or as a printer format, but it is really a programming language. This has enabled it to support countless types of new document formats, some not even created until 20 years later. We haven't needed the Postscript equivalent of CSS/MathML/SVG/Javascript/etc. because the original language was already capable of handling these types of tasks. That is to say, although it didn't directly provide support for these things, because it was a programming language they could be added through Postscript code, without the need to upgrade or augment the Postscript standard (it has been upgraded twice since 1984, I think, but that is much less often than the W3C suite of standards).
An imperative language like Postscript does cause problems, though, if you want to work with the contents of a document/program rather than just look at its visual representation/presentation. This is why I instead suggested using an OO language. The idea is to send objects which know how to display themselves, which is slightly different from just sending a program which displays "something". Once you get the objects, you could ask them to display themselves, or you could just work with them directly. Such a system would provide a nice separation of presentation and content (something we don't actually have with (X)HTML/CSS/Javascript) and would be open-ended to new requirements without having to constantly add tags/features/supporting-standards, etc.
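To illustrate (a minimal Python sketch; the Temperature class and its method names are invented for this example, not taken from any real system): presentation lives in methods, but the receiver still holds the object itself, so the content can be mined or re-presented without parsing a rendered document.

    from dataclasses import dataclass

    @dataclass
    class Temperature:
        city: str
        celsius: float

        # Presentation is just behavior the object carries with it...
        def display(self) -> str:
            return f"{self.city}: {self.celsius:.1f} degrees C"

        def speak(self) -> str:  # an alternative interface to the same content
            return f"The temperature in {self.city} is {round(self.celsius)} degrees"

    # ...but the receiver still has the data itself, not a rendering of it.
    t = Temperature("Boston", 21.5)
    print(t.display())
    needs_coat = t.celsius < 10  # direct access to the content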
...that PDF is a more popular document exchange format when it comes to the masses. PostScript is much more capable, but that does not necessarily translate into making it more useful from the standpoint of producer or consumer (or VAR). Personally, I think that we should have more forms of declarative programming, where we declare intention as opposed to specifying the specifics of how to accomplish the task. (I don't want to have to write a program every time I decide to post to LtU.)
Having witnessed the ps-to-pdf transition, I don't think it sheds much light. Working on a Unix machine in the mid-90s, printing and viewing postscript was easy, while PDFs were either not supported, or viewed through the vile Adobe Acrobat and printed via pdf2ps. It always annoyed me when people distributed files as PDF. Ten years later, on a Mac, my situation is exactly reversed: there still isn't a decent postscript viewer, but the PDF viewer is quite pleasant, and PDF is the default format.
My impression is that free Postscript handling on Mac and Windows (gsview, ghostscript) always sucked. Adobe introduced PDF along with a slightly-less-terrible viewer (for users) and silly "rights management" (for distributors), and it was these features, not anything about the format, that made the world change.
One might ask for documents (downloads with a t-shirt on) for much the same reason that one may be willing to restrict oneself to an ascetic FP style (programs with a hair-shirt on) — knowing that certain things can't be done, in the context of unknown sources, may be more confidence-inspiring than knowing that other things can be done. [0]
Even the recidivist untyped imperative programmer may agree that while using "eval" can make programs much smaller, it also allows for some spectacular failure modes.
On one hand, I note that programming objects is commonly held to be a more difficult task than writing documents — if writing were like programming, beginning writers would have less advice about communicating well, and more advice about how to avoid putting their readers into a state requiring a debugging session (or at least a restart). [1]
On the other hand, I have more confidence in downloading documents which act as data and require my machines to provide their behavior than I do in downloading objects which treat my machines as data and themselves act as code with potentially unlimited behavior. [2]
[0] King Midas wished to turn everything he touched into gold; programmers wish to turn everything they touch into code.
[1] Note the common pattern of "data driven" systems when nontechnical people provide the content.
[2] Perhaps cheap virtualization is a better substitute for a capability system, but at the moment it seems to be even more of a kludge than using document formats to allow data to be data.
If your goal is to make a pixel-perfect copy of a master, then sure, a small program in a language which can draw pixels is enough. But if you want to interoperate with real humans and their weaknesses (short-sightedness, blindness), "displaying itself" stops being simple: the program which does the adaptation starts to become really complex.
The proof is in the pudding: do you think that users with a disability prefer Flash/Java websites, or the ones which are made in HTML?
I never said anything about "pixel perfect" copies. Also, I didn't say to just send programs, but rather to send objects which know how to draw themselves. This isn't to say that they would be context-insensitive and would always display themselves the exact same way, pixel-for-pixel, no matter what. They could provide viewing environments just as rich as, or even richer than, "modern" browsers. Objects could even know how to provide alternative interfaces for themselves (for those with disabilities or alternative/small-form-factor devices). I was thinking about the way that Smalltalk/Squeak deals with documents as a model, and thought that most readers here would be familiar with that.
My post wasn't really about Java at all, but rather about favouring directly executable formats, in the form of mobile code, over the never-ending sequence of paper-defined standards which are never fully/compatibly implemented. Anyway, Java has very good accessibility APIs for things like screen readers and braille terminals.
Requiring publishers to know about all the possibly-needed alternative interfaces just isn't going to work unless one of the interfaces is a semantics-preserving dump - that is, just the data. There'll always be a requirement the publisher didn't think of or didn't fancy dealing with.
The objects would be the data, so the semantics would always be preserved. You would be sending the "data" objects as themselves, and they would have methods to do things like 'display', 'print', 'say', 'toString', etc., but you would always still have the objects themselves. This is the difference between sending a program with data hidden inside it vs sending data (objects) with methods inside of them. The second approach is what I'm talking about. Rather than sending documents with MIME types or DTDs, you would send data/objects with something like a class and classpath. If your meta-browser hadn't seen this class before, it would download it first.
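Something like this, as a loose Python sketch (the wire format, fetch_class, and the display method are all hypothetical; the point is the flow: on an unknown class, fetch the behavior once, then revive the object as plain data plus methods):

    CLASS_CACHE = {}  # classes this "meta-browser" has already seen

    def receive(class_name, classpath_url, fields):
        """fields is the object's data, e.g. {"city": "Boston", "celsius": 21.5}."""
        if class_name not in CLASS_CACHE:
            # First encounter: download the behavior before reviving the object.
            CLASS_CACHE[class_name] = fetch_class(classpath_url, class_name)
        cls = CLASS_CACHE[class_name]
        obj = cls(**fields)  # the object is the data...
        obj.display()        # ...and it carries its own presentation
        return obj           # the receiver keeps the object, not just a rendering

    def fetch_class(url, name):
        # Stand-in: a real system would download and verify code from the classpath.
        raise NotImplementedError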
While I agreed with much that he had to say, in this case he was comparing a basic assembly language algorithm (that's what he was referring to when he described it as the first data abstraction) to a markup format. HTML is not a programming language and does not do anything. HTML only corresponds to one third of the model he described and, thus, is not apt as the counterpart in a comparative analogy to an assembly language.
You're right in that what he is comparing is an algorithmic approach to a data markup approach. (This certainly isn't the first time I have seen this particular criticism made against the Web bundle-o-protocols, and it is very probably not the first time such a criticism has been made. 1997?)
The idea is that a browser (or a client in general, or any other remote system) should not have to parse and understand the data it gets remotely, because it means that the browser side of things has to agree with the server side of things on what to do with a particular part of the data. It means that a publisher can't be sure that the data is being seen by the clients the way the publisher wants, and that snazzy new media cannot be published because no client would understand it. So, to make the "user experience" better and to make the system more flexible in terms of the media it supports, all you have to do is pack the code to interpret the data along with the data. It's the basic idea behind Flash and JavaScript and PDF. (In case you have not heard that argument, there have been some folks who wanted to replace HTML entirely with PDF.)
A similar complaint is made by the active networking community: some particular flow would like special handling at the routers between here and yonder, but no existing router supports that algorithm (and this flow is not popular enough; no routers will support it in the foreseeable future). But all you need to do is pack some code along with the flow that tells the routers what to do with the packets in the flow.
There is a stack of technical problems with the idea: Anton has already touched on the hardware-independent VM needed for the code. The VM's language needs some properties that may not have been immediately obvious, at least in 1997: security, reliability, non-Turing completeness (Turing non-completeness?)...
But the biggest problems are not really technical, I don't think. For one thing, passing code with data on the web means that the data cannot be re-purposed: it would be hard to scrape the current temperature out of weather.com if the code did not provide access. Searching would be fairly difficult; you could not find the statistical properties of the word "frumious" on the Internet if all of the code did not let you get at the text. Accessibility becomes very problematic, a current hangup with Flash, PDF, and dynamic pages.
Now, you could argue that those problems do not have to occur. The code could allow you to get at the raw data, just with additional information. At that extreme, though, the code becomes... markup. Just with a stack of technical complexities and security issues. But I don't think that is realistic. Code is a very definite abstraction, and when designing any abstraction, the blank space that you do not fill in is as important as the boxes and arrows that you do. That is the whole point of encapsulation, for example.
Sending around a bag of data does not have that effect, and I would argue that it makes the system more flexible, not less. Likewise, plain-ol'-dumb-as-a-stump, best-effort-only routers may not be as efficient in some particular case, but they're cheap, and we can hope to understand the global effects of the local decisions that they make.
[Ok, so I'm not speechless anymore...]
"It means that a publisher can't be sure that the data is being seen by the clients the way the publisher wants,"
is secondary to what I want (as long as what they are publishing is on my machine [in some virtual metaphysical metaphorical way])