Modular Type Classes
by Derek Dreyer, Robert Harper, and Manuel M. T. Chakravarty
ML modules and Haskell type classes have proven to be highly effective tools for program structuring. Modules emphasize explicit configuration of program components and the use of data abstraction. Type classes emphasize implicit program construction and ad hoc polymorphism. In this paper, we show how the implicitly-typed style of type class programming may be supported within the framework of an explicitly-typed module language by viewing type classes as a particular mode of use of modules. This view offers a harmonious integration of modules and type classes, where type class features, such as class hierarchies and associated types, arise naturally as uses of existing module-language constructs, such as module hierarchies and type components. In addition, programmers have explicit control over which type class instances are available for use by type inference in a given scope. We formalize our approach as a Harper-Stone-style elaboration relation, and provide a sound type inference algorithm as a guide to implementation.
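The core idea, a type class as a signature of operations and an instance as an explicit module value, can be loosely approximated in any language via dictionary passing. The names below (`eq_int`, `eq_list`, `member`) are my own illustration, not from the paper, and the sketch ignores everything the elaboration relation actually does:

```python
# A "class" Eq is just a signature: a record with an 'eq' operation.
# Instances are ordinary first-class values (modules), passed explicitly.

eq_int = {"eq": lambda a, b: a == b}  # the Eq instance for ints

def eq_list(elem):
    """A functor: builds Eq for lists from an Eq instance for the elements
    (the module-language analogue of an instance declaration with a context)."""
    def eq(xs, ys):
        return len(xs) == len(ys) and all(elem["eq"](a, b) for a, b in zip(xs, ys))
    return {"eq": eq}

def member(inst, x, xs):
    """Overloaded membership test: the instance is an explicit argument here,
    where type-class inference would supply it implicitly."""
    return any(inst["eq"](x, y) for y in xs)
```

For example, `member(eq_list(eq_int), [1, 2], [[0], [1, 2]])` passes the list instance by hand; the paper's contribution is letting inference fill in that argument while the programmer controls which instances are in scope.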
(via Windley)
This seems like a nice writeup. The article even manages to introduce monads (without explicitly going into all the details), and provides the right perspective on them (i.e., they are not just for IO). Also featured: Parsec and a bit of irony (see footnote #2).
EDUCATIONAL PEARL: ‘Proof-directed debugging’ revisited for a first-order version. Kwangkeun Yi. JFP. 16(6):663-670.
Some 10 years ago, Harper illustrated the powerful method of proof-directed debugging for developing programs with an article in this journal. Unfortunately, his example uses both higher-order functions and continuation-passing style, which is too difficult for students in an introductory programming course. In this pearl, we present a first-order version of Harper's example and demonstrate that it is easy to transform the final version into an efficient state machine. Our new version convinces students that the approach is useful, even essential, in developing both correct and efficient programs.
The problem is regular expression matching: checking whether a string S belongs to the language specified by the regular expression r.
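Yi's pearl develops its own first-order matcher; as a rough stand-in for readers who want something concrete, here is a different but also first-order approach, a matcher based on Brzozowski derivatives. The tuple representation and function names are mine, not the paper's:

```python
# Regexes as tagged tuples:
#   ("empty",)   matches nothing        ("eps",)        matches ""
#   ("chr", c)   matches the char c     ("alt", r, s), ("cat", r, s), ("star", r)

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag in ("empty", "chr"):
        return False
    if tag in ("eps", "star"):
        return True
    if tag == "cat":
        return nullable(r[1]) and nullable(r[2])
    return nullable(r[1]) or nullable(r[2])  # alt

def deriv(r, c):
    """The derivative of r w.r.t. c: accepts w iff r accepts c followed by w."""
    tag = r[0]
    if tag in ("empty", "eps"):
        return ("empty",)
    if tag == "chr":
        return ("eps",) if r[1] == c else ("empty",)
    if tag == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "cat":
        head = ("cat", deriv(r[1], c), r[2])
        return ("alt", head, deriv(r[2], c)) if nullable(r[1]) else head
    return ("cat", deriv(r[1], c), r)  # star

def matches(r, s):
    """S is in the language of r iff the derivative chain ends nullable."""
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

With `r = ("star", ("alt", ("chr", "a"), ("chr", "b")))`, i.e. (a|b)*, `matches(r, "abba")` is True and `matches(r, "abc")` is False. Each step is plain first-order recursion on the regex, which is exactly the flavour of program the pearl argues is accessible to beginners.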
We'll be peeling away that disguise and showing how you can integrate programming and proof in a single system, if you happen to have a functional language with an expressive type system handy...
Have you guessed? It's an Epigram course module from the University of Nottingham.
What you will find on the linked page is a set of exercises consisting of downloadable Epigram files for your enjoyment.
Via the PLT Scheme mailing list:
MzScheme's support for prompts and composable continuations most closely resembles Dorai Sitaram's tagged `%' and `fcontrol' operators; see "Handling Control", PLDI'93 [previously mentioned at LtU], or see Dorai's dissertation.
You can read about the primitive functionality of delimited continuations in the language manual, and about all the different control operators available in the new control.ss library in the library manual. Among them: %, fcontrol, prompt, control, prompt-at, control-at, reset, shift, reset-at, shift-at, prompt0, reset0, control0, shift0, prompt0-at, reset0-at, control0-at, shift0-at, set, cupto.
(To prevent any confusion: this change is in the latest SVN but not yet a final release.)
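For readers new to these operators, the behaviour of shift/reset can be sketched outside Scheme in plain continuation-passing style. The encoding below (`unit`, `add`, and the CPS representation) is a toy of mine, not MzScheme's implementation:

```python
# A computation is a function from a continuation to an answer (CPS).
def unit(v):
    return lambda k: k(v)

def add(m, n):
    return lambda k: m(lambda a: n(lambda b: k(a + b)))

def reset(m):
    # Delimit: run m to completion with the identity continuation,
    # then hand its value to the outer continuation.
    return lambda k: k(m(lambda v: v))

def shift(f):
    # Capture the continuation up to the nearest reset as an ordinary,
    # reusable function, and run f with the delimiter reinstalled.
    return lambda k: f(lambda v: unit(k(v)))(lambda v: v)

# (reset (+ 1 (shift k (+ (k 2) (k 3)))))  ==>  (1+2) + (1+3) = 7
prog = reset(add(unit(1), shift(lambda c: add(c(2), c(3)))))
```

Running `prog(lambda v: v)` yields 7: the captured continuation "add 1" is invoked twice, which is exactly what makes these continuations composable rather than abortive.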
By Chris Heunen and Bart Jacobs
At first, these Arrows may look somewhat arbitrary. Here we show that they are categorically fairly civilised, by showing that they correspond to monoids in suitable subcategories of bifunctors C^op × C → C. This shows that, at a suitable level of abstraction, arrows are like monads …
by Neil Ghani and Patricia Johann:
Initial algebra semantics is one of the cornerstones of the theory of modern programming languages. It provides support for fold combinators encapsulating structured recursion over data structures, thereby making it possible to both reason about and transform programs in principled ways. Recently, Ghani, Uustalu, and Vene extended the usual initial algebra semantics to support not only standard fold combinators for inductive types, but also Church encodings and build combinators for them. In addition to being theoretically useful in ensuring that build is seen as a fundamental part of the basic infrastructure for programming with inductive types, this development has practical merit: the fold and build combinators can be used to define fold/build rules which optimise modularly constructed programs by eliminating intermediate inductive data structures.
In this paper, we apply this extended initial algebra semantics to the theory and practice of programming with nested data types. In particular, we use it to simplify the theory of folds for nested types, as well as to provide the first Church encodings, build combinators, and fold/build fusion rules for them.
Code for "Folds, Church Encodings, Builds, and Short Cut Fusion for Nested Types: A Principled Approach"
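The fold/build idea the abstract builds on is easiest to see for ordinary lists (the paper's contribution is extending it to nested types). This rendering, with my own names, covers only the standard list case:

```python
def fold(c, n, xs):
    """Structured recursion over a list: replace cons by c and nil by n."""
    acc = n
    for x in reversed(xs):
        acc = c(x, acc)
    return acc

def build(g):
    """Produce a list from a Church-style generator g by handing it
    the real cons and nil."""
    return g(lambda x, xs: [x] + xs, [])

# The fusion rule:  fold(c, n, build(g)) == g(c, n)
# i.e. the intermediate list on the left never needs to exist.
g = lambda cons, nil: cons(1, cons(2, cons(3, nil)))
```

Here `fold(lambda x, a: x + a, 0, build(g))` and `g(lambda x, a: x + a, 0)` both yield 6, but the right-hand side allocates no list at all; that is the short-cut fusion the paper generalises to nested types.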
Gradual Typing for Functional Languages
Static and dynamic type systems have well-known strengths and weaknesses, and each is better suited for different programming tasks. There have been many efforts to integrate static and dynamic typing and thereby combine the benefits of both typing disciplines in the same language. The flexibility of static typing can be improved by adding a type Dynamic and a typecase form. The safety and performance of dynamic typing can be improved by adding optional type annotations or by performing type inference (as in soft typing). However, there has been little formal work on type systems that allow a programmer-controlled migration between dynamic and static typing. Thatte proposed Quasi-Static Typing, but it does not statically catch all type errors in completely annotated programs. Anderson and Drossopoulou defined a nominal type system for an object-oriented language with optional type annotations. However, developing a sound, gradual type system for functional languages with structural types is an open problem.
In this paper we present a solution based on the intuition that the structure of a type may be partially known/unknown at compile-time and the job of the type system is to catch incompatibilities between the known parts of types. We define the static and dynamic semantics of a λ-calculus with optional type annotations and we prove that its type system is sound with respect to the simply-typed λ-calculus for fully-annotated terms. We prove that this calculus is type safe and that the cost of dynamism is “pay-as-you-go”.
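The "known parts of types" intuition boils down to a consistency relation between types. In this sketch (my own encoding: strings for base types, pairs for function types, "?" for the unknown type), the key design point is that consistency is reflexive and symmetric but deliberately not transitive:

```python
DYN = "?"  # the unknown (dynamic) type

def consistent(s, t):
    """Are the known parts of s and t compatible?"""
    if s == DYN or t == DYN:
        return True                      # unknown is consistent with anything
    if isinstance(s, tuple) and isinstance(t, tuple):
        dom_s, cod_s = s                 # function type: (domain, codomain)
        dom_t, cod_t = t
        return consistent(dom_s, dom_t) and consistent(cod_s, cod_t)
    return s == t                        # base types must match exactly
```

So `consistent(("int", "int"), (DYN, "int"))` is True, while `consistent("int", "bool")` is False even though both are consistent with "?": if the relation were closed under transitivity, every implicit cast would be well-typed and the type system would catch nothing.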
In other news, the Holy Grail has been found. Film at 11.
This piece of genius is the combined work of Jeremy Siek, of Boost fame, and Walid Taha, of MetaOCaml fame. The formalization of their work in Isabelle/Isar can be found here.
I found this while tracking down the relocated Concoqtion paper. In that process, I also found Jeremy Siek's other new papers, including his "Semantic Analysis of C++ Templates" and "Concepts: Linguistic Support for Generic Programming in C++." Just visit Siek's home page and read all of his new papers, each of which is worth a story here.
The Kernel Programming Language, by John N. Shutt:
Kernel is a conservative, Scheme-like dialect of Lisp in which everything is a first-class object.
"But," you may ask, "aren't all objects first-class in Scheme?" (I'm glad you asked.) No, they aren't. Special-form combiners are second-class objects. To borrow a phrase from the original description of first- and second-class objects by Christopher Strachey, they have to appear in person under their own names. (There are also several other kinds of second-class objects in Scheme, but special-form combiners are the most commonplace.)
The idea of first-class operative combiners, i.e., first-class combiners whose operands are never evaluated, has been around a long time. Such creatures were supported by mainstream Lisps through the 1970s, notably under the name FEXPRs, but they made a mess out of the language semantics because they were non-orthogonal to the ordinary variety of procedures constructed via lambda.
Kernel eliminates the non-orthogonality problem by breaking the classical lambda constructor into two orthogonal parts, one of which is the Kernel constructor for first-class operatives.
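Shutt's split between operatives and applicatives can be sketched with a toy evaluator. Everything here (Python lists as expressions, strings as symbols, `$if` as the sample operative) is my own illustration, not Kernel itself:

```python
class Operative:
    """A combiner that receives its operands unevaluated, plus the caller's env."""
    def __init__(self, fn):
        self.fn = fn

def evaluate(expr, env):
    if isinstance(expr, str):                    # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):               # literal: self-evaluating
        return expr
    combiner = evaluate(expr[0], env)
    if isinstance(combiner, Operative):
        return combiner.fn(expr[1:], env)        # operands passed verbatim
    args = [evaluate(e, env) for e in expr[1:]]  # applicative: evaluate operands
    return combiner(*args)

# '$if' as a first-class operative: the untaken branch is never evaluated,
# so it can even mention unbound symbols without error.
env = {
    "$if": Operative(lambda ops, e: evaluate(ops[1], e)
                     if evaluate(ops[0], e) else evaluate(ops[2], e)),
    "+": lambda *xs: sum(xs),
}
```

For example, `evaluate(["$if", True, ["+", 1, 2], ["no-such-op"]], env)` yields 3; if `$if` were an ordinary applicative, the lookup of "no-such-op" would fail before `$if` ever ran. Because `$if` is a value in `env` like any other, it can be passed around and rebound, which is precisely the first-class status Scheme's special forms lack.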
Via Shriram Krishnamurthi on c.l.scheme. The story title is from an older paper on Kernel.
Google's MapReduce Programming Model -- Revisited. Ralf Lämmel.
MapReduce is a programming model (and an associated implementation) used at Google for processing large amounts of input data (such as millions of documents) and generating keyed or indexed query results (such as the indexes used for Google's web search). The programming model is stunningly simple, and it allows the programmer to largely abstract from the parallel and distributed execution of the data-processing computations. Parallel execution is promoted by the assumed skeleton of MapReduce computations. Load balancing, network performance and fault tolerance are taken care of by the MapReduce implementation.
We revisit the MapReduce programming model in an attempt to provide a rigorous description of the model. We focus on the key abstraction for MapReduce computations; this abstraction is parameterized by the problem-specific ingredients for data extraction and reduction. We use Haskell as a lightweight specification language to capture the essence of MapReduce computations in a succinct, executable and strongly typed manner. Our study substantiates that clarity, generality and correctness of designs (or the presentations thereof) are easily improved, if modest functional programming skills are put to work.
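Lämmel's specification is in Haskell; the bare skeleton of the abstraction he isolates (map each input to key–value pairs, group by key, reduce per key) can also be sketched, with my own names and none of the paper's refinements, as:

```python
from collections import defaultdict

def map_reduce(mapper, reducer, inputs):
    """mapper: input -> iterable of (key, value); reducer: (key, values) -> result."""
    groups = defaultdict(list)
    for item in inputs:                  # the 'map' phase (parallel in real MapReduce)
        for k, v in mapper(item):
            groups[k].append(v)
    # the 'reduce' phase: one independent reduction per key
    return {k: reducer(k, vs) for k, vs in groups.items()}

def word_count(docs):
    """The canonical instance: count word occurrences across documents."""
    return map_reduce(lambda doc: ((w, 1) for w in doc.split()),
                      lambda _w, counts: sum(counts),
                      docs)
```

`word_count(["a b", "b c"])` yields `{"a": 1, "b": 2, "c": 1}`. The two problem-specific ingredients, `mapper` and `reducer`, are exactly the parameters the abstract says the key abstraction is parameterized by; everything else is reusable plumbing.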
This is a fun example of type-directed exploration and prototyping, and it pokes some gentle fun at Google's (ab)use of standard terminology. It's neat to get a glimpse into the programming process of a very experienced Haskell programmer.
(From the forums.)