Critiques

Programming and Scaling
Programming and Scaling, a one-hour lecture by Alan Kay at his finest (and that's saying something!). Some of my favorite quotes:
And there are some other nice ideas in there: "Model-T-Shirt Programming" - software whose definition fits on a T-shirt. And imagining source code sizes in terms of books: 20,000 LOC = a 400-page book; a million LOC = a stack of books one meter high (Windows Vista: a 140 m stack of books); a short back-of-the-envelope sketch follows this post. Note: this is a Flash video; other formats are available.
By Manuel J. Simoni at 2011-08-06 15:47 | Critiques | Fun | History | Teaching & Learning | 89 comments | other blogs | 66836 reads
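A quick back-of-the-envelope sketch of the book arithmetic above, assuming the figures scale linearly (20,000 LOC per 400-page book, 1,000,000 LOC per metre of stacked books). The helper names are hypothetical; the ratios are taken from the post, not measured.

```haskell
-- Book-stack arithmetic, using the ratios quoted in the post above.
locPerBook, locPerMetre :: Double
locPerBook  = 20000      -- one 400-page book
locPerMetre = 1000000    -- a one-metre stack, i.e. fifty such books

stackHeightMetres :: Double -> Double
stackHeightMetres loc = loc / locPerMetre

booksFor :: Double -> Double
booksFor loc = loc / locPerBook

main :: IO ()
main = do
  print (booksFor 1000000)            -- 50 books in a one-metre stack
  print (stackHeightMetres 1000000)   -- 1.0 metre
  print (140 * locPerMetre)           -- a 140 m stack is roughly 1.4e8 LOC on this scale
```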
Memory Models: A Case for Rethinking Parallel Languages and Hardware, CACM, August 2010
Memory Models: A Case for Rethinking Parallel Languages and Hardware by Sarita V. Adve and Hans-J. Boehm. This is a preprint of the published version.
By Z-Bo at 2011-02-27 06:22 | Critiques | History | Implementation | Parallel/Distributed | 19 comments | other blogs | 19714 reads
Pure and Declarative Syntax Definition: Paradise Lost and Regained, Onward 2010
Pure and Declarative Syntax Definition: Paradise Lost and Regained by Lennart C. L. Kats, Eelco Visser, and Guido Wachsmuth of Delft University of Technology.
I haven't compared this version with the Onward 2010 version, but they look essentially the same. It seems timely to post this paper, considering the other recent story Yacc is dead. There is not a whole lot to argue against in this paper, since we all "know" the other approaches aren't as elegant and resort to them only for specific reasons such as efficiency. Yet this is the first paper I know of that tries to state the argument to software engineers. For example, the Dragon Book, in every single edition, effectively brushes these topics aside. In particular, the Dragon Book does not even mention scannerless parsing as a technique, and instead only explains the "advantages" of using a scanner (a tiny scannerless sketch follows this post). Unfortunately, the authors of this paper don't consider other design proposals either, such as Van Wyk's context-aware scanners from GPCE 2007. It is examples like these that made me wish the paper were a bit more robust in its analysis; the examples seem focused on the authors' previous work. If you are not familiar with the authors' previous work in this area, the paper covers it in the references. It includes Martin Bravenboer's work on modular Eclipse IDE support for AspectJ.
By Z-Bo at 2010-11-29 17:19 | Critiques | DSL | History | Implementation | Software Engineering | 16 comments | other blogs | 15354 reads
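To make the scanner/scannerless distinction concrete, here is a minimal sketch in Haskell using Parsec (not the SDF2/SGLR machinery the paper is actually about): the parser works directly on characters, with no separate token stream, and the keyword/identifier overlap is resolved inside the grammar itself.

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

data Expr = Var String | Let String Expr Expr deriving Show

-- Identifiers are recognized straight from characters; the grammar itself
-- rules out reserved words, a job a separate scanner would normally do.
identifier :: Parser String
identifier = do
  name <- many1 letter
  if name `elem` ["let", "in"] then parserZero else return name

expr :: Parser Expr
expr = try letExpr <|> (Var <$> identifier)

-- A fuller grammar would also guard keywords with notFollowedBy letter;
-- this sketch keeps only enough to show the idea.
letExpr :: Parser Expr
letExpr = do
  _  <- string "let" >> spaces
  x  <- identifier   <* spaces
  _  <- char '='     >> spaces
  e1 <- expr         <* spaces
  _  <- string "in"  >> spaces
  Let x e1 <$> expr

main :: IO ()
main = print (parse expr "" "let x = y in x")
-- Right (Let "x" (Var "y") (Var "x"))
```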
Joe Duffy: A (brief) retrospective on transactional memory
A (brief) retrospective on transactional memory, by Joe Duffy, January 3rd, 2010. Although this is a blog post, don't expect to read it all on your lunch break...

The STM.NET incubator project was canceled May 11, 2010, after beginning public life July 27, 2009 at DevLabs. In this blog post, written four months prior to its cancellation, Joe Duffy discusses the practical engineering challenges of implementing Software Transactional Memory in .NET. Note: he starts off with a disclaimer that he was not engaged in the STM.NET project past its initial working-group phase. In short, Joe argues, "Throughout, it became abundantly clear that TM, much like generics, was a systemic and platform-wide technology shift. It didn’t require type theory, but the road ahead sure wasn’t going to be easy." The blog post covers the many implementation challenges that platform-wide support for STM would pose in .NET, including the options that were considered. He does not mention Maurice Herlihy's SXM library approach, but refers to Tim Harris's work several times.

There was plenty here that surprised me, especially when you compare Concurrent Haskell's STM implementation to STM.NET's design decisions and the interesting debates the team had. In Concurrent Haskell, issues Joe raises, like making Console.WriteLine transactional, are delegated to the type system by the very nature of the STM monad and TVars, preventing programmers from writing such wishy-washy code (see the small example after this post). To be honest, this is why I didn't understand Joe's "it didn't require type theory" gambit, since some of the design concerns are mediated in Concurrent Haskell via type theory. On the other hand, the pragmatics Joe discusses, and the platform-wide CLR integration they were shooting for, remind me of The Transactional Memory / Garbage Collection Analogy. Joe also wrote a briefer follow-up post, More thoughts on transactional memory, where he talks more about Barbara Liskov's Argus.
By Z-Bo at 2010-09-07 17:05 | Critiques | Implementation | Parallel/Distributed | Software Engineering | 7 comments | other blogs | 12118 reads
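As a small illustration of the Concurrent Haskell point above (this is ordinary GHC STM, not STM.NET): the types confine a transaction to STM actions, so an IO action such as putStrLn simply does not type-check inside it.

```haskell
import Control.Concurrent.STM

-- Move money between two transactional variables, atomically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)
  -- putStrLn "transferred" would be rejected here: its type is IO (), not STM ().

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print  -- 70
  readTVarIO b >>= print  -- 30
```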
Sapir-Whorf 70 years on
Many people have looked at programming languages through the Sapir-Whorf lens, so it's not uncommon to find people making PL claims using that hypothesis. Not surprisingly, the topic keeps reappearing here on LtU. This week's NY Times Magazine has an article titled Does Your Language Shape How You Think? by Guy Deutscher, which starts as a retrospective on Whorf and then goes into what new research has shown.
Bart De Smet on .NET 4's System.Interactive library
Microsoft employee Bart De Smet, who has a widely trafficked blog, has been writing a lot in the past few months about a new library being designed by his group at Microsoft. Here is a whole truckload of blog-post links, in chronological order, which appears to be how Bart intended folks to read them: Dec 26, 2009: More LINQ with System.Interactive – The Ultimate Imperative

I don't usually read blogs, but I thought this was a pretty cogent series of posts. Also, judging by how interested LtU and the surrounding blogosphere were in Erik Meijer's presentation on the Rx framework at the JVM Language Summit 2009, I figured people would like this as well.

The Recruitment Theory of Language Origins
Leo Meyerovich recently started a thread on LtU asking about Historical or sociological studies of programming language evolution?. I've been meaning to post a paper on this topic to LtU for a while now, but was simply waiting for the opportune time to fit it into a forum discussion. With Leo's question at hand, I give you an interesting paper that models language evolution, by artificial intelligence researcher Luc Steels. Steels has spent over 10 years researching this area, and his recent paper, The Recruitment Theory of Language Origins, summarizes one of his models for dealing with language evolution:
Why Normalization Failed to Become the Ultimate Guide for Database Designers?
While trying to find marshall's claim that Alberto Mendelzon says the universal relation is an idea re-invented once every 3 years (and later finding a quote by Jeffrey Ullman that the universal relation is re-invented 3 times a year), I stumbled across a very provocative rant by a researcher/practitioner: Why Normalization Failed to Become the Ultimate Guide for Database Designers? by Martin Fotache. It shares an interesting wealth of experience and knowledge about logical design. The author is obviously well-read and, unlike the usual debates I've seen about this topic, presents the argument thoroughly and comprehensively. The abstract is:
The body of the paper presents an explanation for why practitioners have rejected normalization. The author also shares his opinion on potentially underexplored ideas, drawing from an obviously well-researched depth of knowledge. In recent years, some researchers, such as Microsoft's Pat Helland, have even said Normalization is for sissies (only to further this with later formal publications, such as advocating we should be Building on Quicksand). Yet the PLT community is pushing for the exact opposite: language theory is firmly rooted in formal grammars and proven-correct 'tricks' for manipulating and using those formal grammars; it does no good to define a language if it does not have mathematical properties ensuring reliability and repeatability of results. This represents a real tension between systems theory and PLT.

I realize this paper focuses on methodologies for creating model primitives, comparing mathematical frameworks to frameworks guided by intuition and then mapped to mathematical notions (relations in the relational model), and some may not see it as PLT. Others, such as Date, closely relate the understanding of primitives to PLT: Date claims the SQL language is to blame and has gone to the lengths of creating a teaching language, Tutorial D, to teach relational theory. In my experience, nothing seems to affect lines of code in an enterprise system more than schema design, both in the data layer and the logic layer, and often an inverse relationship exists between the two; hence the use of object-relational mapping layers to consolidate the inevitable problems where there will be The Many Forms of a Single Fact (Kent, 1988). Mapping stabilizes the problem domain by labeling correspondences between all the possible unique structures. I refer to this among friends and coworkers as the N+1 Schema Problem, as there is generally one schema thought to be canonical, either extensionally or intensionally, and N other versions of that schema (a small illustration follows below).

Question: should interactive programming languages aid practitioners in reasoning about their bad data models, (hand waving) perhaps by modeling each unique structure and explaining how they relate? I could see several reasons why that would be a bad idea, but as the above paper suggests, math is not always the best indicator of what practitioners will adopt. In many ways this seems to be the spirit of the idea behind work such as Stephen Kell's interest in approaching modularity by supporting evolutionary compatibility between APIs (source texts) and ABIs (binaries), as covered in his Onward! paper, The Mythical Matched Modules: Overcoming the Tyranny of Inflexible Software Construction. Similar ideas have been in middleware systems for years and are known as wrapper architectures (e.g., Don’t Scrap It, Wrap It!), but they haven't seen much PLT interest that I'm aware of; "middleware" might as well be a synonym for Kell's "integration domains" concept.
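To make the "Many Forms of a Single Fact" / N+1 schema point concrete, here is a tiny sketch with hypothetical names: the same fact (a customer's email) lives in two different structures, and an explicit mapping function labels the correspondence between them, which is essentially what an object-relational or wrapper layer does at scale.

```haskell
-- The schema treated as canonical.
data CanonicalCustomer = CanonicalCustomer
  { customerId :: Int
  , email      :: String
  } deriving Show

-- One of the N other forms of the same fact, as it might appear in a
-- legacy or denormalized subsystem.
data LegacyContactRow = LegacyContactRow
  { contactKey   :: String   -- e.g. "cust-42"
  , contactEmail :: String
  } deriving Show

-- The mapping layer: one direction of the labeled correspondence.
toLegacy :: CanonicalCustomer -> LegacyContactRow
toLegacy c = LegacyContactRow ("cust-" ++ show (customerId c)) (email c)

main :: IO ()
main = print (toLegacy (CanonicalCustomer 42 "ada@example.org"))
```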
EASTL -- Electronic Arts Standard Template Library
The gaming studio Electronic Arts maintains their own version of the Standard Template Library. Despite the fact that this is old news, I checked the LtU archives and the new site, and there is no mention of EASTL anywhere. There are quite a few good blog posts about EASTL on the Internet, as well as the following paper, EASTL -- Electronic Arts Standard Template Library by Paul Pedriana:

This paper is a good introduction to the unique set of requirements video game development studios face, and complements Manuel Simoni's recent story about The AI Systems of Left 4 Dead. This paper could be a useful inroad for those seeking to apply newer object-functional programming languages and ideas to game development.
By Z-Bo at 2009-12-22 15:05 | Critiques | Implementation | Meta-Programming | 1 comment | other blogs | 17498 reads
Back to the Future: Lisp as a Base for a Statistical Computing System
Back to the Future: Lisp as a Base for a Statistical Computing System by Ross Ihaka and Duncan Temple Lang, and the accompanying slides. This paper was previously discussed on comp.lang.lisp, but apparently not covered on LtU before.
Footnote: Duncan Temple Lang is a core developer of R and has worked on the core engine for TIBCO's S-PLUS. Thanks to LtU user bashyal for providing the links.