A Verified Compiler for an Impure Functional Language
Further work on/with LambdaTamer for certified compiler development. By Paul Snively at 2010-01-09 17:03 | Functional | Implementation | Lambda Calculus | Semantics | Type Theory | 4 comments | other blogs | 9772 reads
Certified Programming With Dependent Types Goes Beta

Certified Programming With Dependent Types. From the introduction:
This is the best Coq tutorial that I know of, partly for being comprehensive, and partly for taking a very different tack than most, with Adam's emphasis on proof automation using Coq's Ltac tactic language. It provides an invaluable education toward understanding what's going on in either LambdaTamer or Ynot, both of which are important projects in their own right. Please note that Adam is explicitly requesting feedback on this work. By Paul Snively at 2010-01-09 16:56 | Functional | Lambda Calculus | Logic/Declarative | Misc Books | Semantics | Teaching & Learning | Type Theory | 6 comments | other blogs | 10951 reads
Why Normalization Failed to Become the Ultimate Guide for Database Designers?

While trying to find marshall's claim that Alberto Mendelzon says the universal relation is an idea re-invented once every 3 years (and later finding a quote by Jeffrey Ullman that the universal relation is re-invented 3 times a year), I stumbled across a very provocative rant by a researcher/practitioner: Why Normalization Failed to Become the Ultimate Guide for Database Designers? by Martin Fotache. It shares a wealth of interesting experience and knowledge about logical design. The author is obviously well-read and, unlike the usual debates I've seen on this topic, presents the argument thoroughly and comprehensively. The abstract is:
The body of the paper presents an explanation for why practitioners have rejected normalization. The author also shares his opinion on potentially underexplored ideas, drawing from an obviously well-researched depth of knowledge. In recent years, some researchers, such as Microsoft's Pat Helland, have even said Normalization is for sissies (only furthering this with later formal publications, such as advocating that we should be Building on Quicksand). Yet the PLT community is pushing for the exact opposite. Language theory is firmly rooted in formal grammars and proven-correct 'tricks' for manipulating and using those formal grammars; it does no good to define a language if it does not have mathematical properties ensuring reliability and repeatability of results. This represents a real tension between systems theory and PLT. I realize this paper focuses on methodologies for creating model primitives, comparing mathematical frameworks to frameworks guided by intuition and then mapped to mathematical notions (relations in the relational model), and some may not see it as PLT. Others, such as Date, closely relate the understanding of primitives to PLT: Date claims the SQL language is to blame and has gone to the length of creating a teaching language, Tutorial D, to teach relational theory. In my experience, nothing seems to affect lines of code in an enterprise system more than schema design, both in the data layer and the logic layer, and often an inverse relationship exists between the two; hence the use of object-relational mapping layers to consolidate inevitable problems where there will be The Many Forms of a Single Fact (Kent, 1988). Mapping stabilizes the problem domain by labeling correspondences between all the possible unique structures. I refer to this among friends and coworkers as the N+1 Schema Problem, as there is generally 1 schema thought to be canonical, either extensionally or intensionally, and N other versions of that schema.
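To make the N+1 Schema Problem concrete, here is a minimal, hypothetical Python sketch (all schemas and field names are invented for illustration, not drawn from the paper): one fact, a shipping address, stored under three ad-hoc shapes, plus mapping functions that label the correspondences back to a single canonical form.

```python
# Hypothetical sketch of the "N+1 Schema Problem": the same fact, a
# customer's shipping address, stored under three ad-hoc schemas, with
# mappings labeling the correspondences back to one canonical form.

canonical = {"street": "10 Main St", "city": "Springfield", "zip": "01101"}

# Variant 1: the address flattened into a single display string.
flat = "10 Main St, Springfield 01101"

# Variant 2: the address denormalized into an orders row.
order_row = {"order_id": 7, "ship_street": "10 Main St",
             "ship_city": "Springfield", "ship_zip": "01101"}

def from_order_row(row):
    """Map the denormalized orders schema back to the canonical shape."""
    return {"street": row["ship_street"], "city": row["ship_city"],
            "zip": row["ship_zip"]}

def from_flat(s):
    """Map the display-string schema back to the canonical shape."""
    street, rest = s.split(", ")
    city, zip_code = rest.rsplit(" ", 1)
    return {"street": street, "city": city, "zip": zip_code}

# Every variant should round-trip to the same canonical fact; when an
# update touches only one variant, the others silently go stale.
assert from_order_row(order_row) == from_flat(flat) == canonical
```

The mappings are exactly the stabilizing layer described above: they do not remove the N redundant schemas, they merely make the correspondences explicit.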
Question: Should interactive programming languages aid practitioners in reasoning about their bad data models, (hand waving) perhaps by modeling each unique structure and explaining how they relate? I could see several reasons why that would be a bad idea, but as the above paper suggests, math is not always the best indicator of what practitioners will adopt. In many ways this seems to be the spirit of the idea behind such work as Stephen Kell's interest in approaching modularity by supporting evolutionary compatibility between APIs (source texts) and ABIs (binaries), as covered in his Onward! paper, The Mythical Matched Modules: Overcoming the Tyranny of Inflexible Software Construction. Similar ideas have been in middleware systems for years and are known as wrapper architectures (e.g., Don't Scrap It, Wrap It!), but haven't seen much PLT interest that I'm aware of; "middleware" might as well be a synonym for Kell's "integration domains" concept.

Simplicial Databases

Simplicial Databases, David I. Spivak.
This is what happens when you try to take the existence of ORDER BY and COUNT in SQL seriously. :-) If you're puzzled by how a geometric idea like simplexes could show up here, remember that the algebraic view of simplicial sets is as presheaves on the category of finite total orders and order-preserving maps. Every finite sequence gives rise to a total order on its set of positions, and tables have rows and columns as sequences!

50 years of Advanced Programming – an Anniversary Seminar on Algol 60

The influential Algol 60 report was created 50 years ago, after a meeting in Paris held from 11 to 16 January 1960. A seminar marking the 50th anniversary of Algol 60 will be held next week:
Date/Time: Thursday 14th January 2010, 2.30 - 5.00 p.m.
Venue: The Library, The Science Museum, Exhibition Road, South Kensington, London S.W.7.
Contributors: Session Chairmen: John Florentin and David Hartley

Physics, Topology, Logic and Computation: A Rosetta Stone

Physics, Topology, Logic and Computation: A Rosetta Stone, by John C. Baez and Mike Stay, 2009.
I am not sure whether this should be categorized as "Fun" instead of "Theory", given that "We assume no prior knowledge of category theory, proof theory or computer science". At least one pair from the title (logic and computation) should ring some bells...

ParaSail, a new language oriented toward parallelism and verification

For those who enjoy seeing sausages made, as opposed to just eating them, we have started a blog following the design of a new programming language called "ParaSail", for Parallel Specification and Implementation Language (not to be confused with SAIL or MAINSAIL, languages originating in the Stanford AI Lab): http://parasail-programming-language.blogspot.com/ ParaSail doesn't exist yet, but the blog talks about it in what my colleagues call the "software present tense," presuming some day it might. Postings appear about once a week, though it varies a lot. As with most blogs, the most recent entry is presented first, but it is probably wise to read the first few entries first, as they give a bit of the background for the more current entries.

ParaSail is intended for high-criticality software. The things that might make ParaSail of interest are its implicit parallelism, its integrated support for annotations (preconditions, postconditions, assertions, constraints, etc.), the fact that all modules are parameterized (i.e., are essentially generic templates), and that there are only two kinds of modules: interfaces and classes (which implement interfaces). A type is produced by instantiating an interface with appropriate parameters. The two kinds of modules are intended to cover the entire spectrum of grouping constructs, from packages/modules at the high end to simple types at the low end. ParaSail clearly inherits plenty of ideas from other languages, such as Modula-2/3, ML/CAML/OCAML, CLU, Eiffel, C++, Java, Ada, etc.
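Since ParaSail has no syntax yet, the interface/class split can only be sketched by analogy. Here is a loose Python rendering (not ParaSail code, and purely illustrative) of the idea that an "interface" module is a parameterized contract, a "class" module implements it, and a type is produced by instantiating with parameters:

```python
# Loose analogy only, not ParaSail syntax (which does not exist yet):
# an "interface" module as an abstract generic class, a "class" module
# implementing it, and a type produced by instantiation with parameters.
from abc import ABC, abstractmethod
from typing import Generic, List, TypeVar

T = TypeVar("T")

class Stack(ABC, Generic[T]):          # the "interface" module
    @abstractmethod
    def push(self, item: T) -> None: ...
    @abstractmethod
    def pop(self) -> T: ...

class ListStack(Stack[T]):             # a "class" module implementing it
    def __init__(self) -> None:
        self._items: List[T] = []
    def push(self, item: T) -> None:
        self._items.append(item)
    def pop(self) -> T:
        return self._items.pop()

IntStack = ListStack[int]              # "instantiating" yields a type
s = IntStack()
s.push(41)
s.push(42)
assert s.pop() == 42
```

The analogy is imperfect (Python erases the parameter at run time, and ParaSail's annotations and implicit parallelism have no counterpart here), but it conveys the two-module-kind design.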
The implicit parallelism and the integrated annotations were among the reasons to justify starting from scratch, but there is also the pleasure of now and then starting with a clean sheet of paper. In any case, it is inherently a work in progress, but comments and questions are certainly welcomed. -Tucker Taft

The year in review, and What's to come

Naturally, in this day and age, the path to understanding the past and future goes through Twitter. Now here is the challenge, in order to make this more exciting: what we really need is a statistical analysis of the #code2009 and #code2010 streams (and in particular, their differences). The goal is to post code that does this analysis in the most elegant and succinct way; naturally, using the langues du jour earns bonus points.

Functional Pearl: Implicit Configurations —or, Type Classes Reflect the Values of Types

Functional Pearl: Implicit Configurations —or, Type Classes Reflect the Values of Types, by Oleg Kiselyov and Chung-chieh Shan:
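The paper's central trick is to reify a run-time value as a type, so that configuration is passed implicitly through the type rather than threaded through every call. The paper itself works in Haskell with type classes; as a very rough, hypothetical imitation of the idea only, here is a Python sketch that mints a fresh type carrying a run-time modulus:

```python
# Rough imitation of the paper's idea (the real development is in
# Haskell): "reify" a run-time modulus into a freshly minted type, so
# modular arithmetic reads its configuration from the type instead of
# taking the modulus as an extra argument everywhere.
def reify_modulus(m: int) -> type:
    class Mod:
        modulus = m                      # the value, now part of the type
        def __init__(self, n: int) -> None:
            self.n = n % m
        def __add__(self, other: "Mod") -> "Mod":
            return Mod(self.n + other.n)
        def __mul__(self, other: "Mod") -> "Mod":
            return Mod(self.n * other.n)
    return Mod

Mod7 = reify_modulus(7)
assert (Mod7(5) + Mod7(4)).n == 2   # 9 mod 7
assert (Mod7(3) * Mod7(5)).n == 1   # 15 mod 7
```

Python, of course, gives none of the static guarantees that make the Haskell construction interesting; the point of the sketch is only the shape of the idea: values flowing into types.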
This paper has been mentioned in threads quite a few times, but never had its own story. Of particular interest is the discussion of how to safely support local type class instances, where Haskell only supports global instances.

Holiday Fun: How Programming Language Fanboys See Each Others’ Languages

Perhaps I am a bit dense, but I find this only mildly amusing, not ROFL material. Still, it is amusing enough to share at this time of year. Happy holidays!