John C. Reynolds Doctoral Dissertation Award (SIGPLAN)
I know of some outstanding dissertations. I am sure you do too. So why not nominate them for this honor (for further details see here)?
By Ehud Lamm at 2013-11-05 07:03 | General
Taking Off the Gloves with Reference Counting Immix
Taking Off the Gloves with Reference Counting Immix, by Rifat Shahriyar, Stephen M. Blackburn, and Kathryn S. McKinley:
A new reference counting GC based on the Immix heap layout, which purports to close the remaining performance gap with tracing collectors. It builds on last year's work, Down for the count? Getting reference counting back in the ring, which describes various optimizations to raw reference counting that make it competitive with basic tracing. There's a remaining 10% performance gap with generational tracing that RCImmix closes by using the Immix heap layout with bump pointer allocation (as opposed to the free lists typically used in RC). The improved cache locality of allocation makes RCImmix even faster than the generational tracing Immix collector. However, the bump pointer allocation reduces the incrementality of reference counting and would impact latency. One glaring omission: the paper provides no latency/pause-time measurements. That omission is typical of reference counting papers, since ref counting is inherently incremental, but RCImmix trades off some of that incrementality for throughput by using bump pointer allocation and copy collection, so I'm curious how this impacts the pause times. Reference counting has been discussed a few times here before, and some papers on past ref-counting GCs have been posted in comments, but this seems to be the first top-level post on a competitive reference counting GC.
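To make the allocation-strategy point concrete, here is a small OCaml toy model (my own sketch, not code from the paper; "addresses" are just integer offsets): bump-pointer allocation is a single comparison and increment and hands out adjacent addresses, which is where the cache-locality advantage comes from, whereas a free-list allocator must search for a fitting chunk and returns scattered addresses.

    (* Bump-pointer allocation: O(1), consecutive addresses, good locality. *)
    type bump = { mutable cursor : int; limit : int }

    let bump_alloc h size =
      if h.cursor + size <= h.limit then begin
        let addr = h.cursor in
        h.cursor <- h.cursor + size;        (* allocation is a single bump *)
        Some addr
      end else
        None                                (* out of space: time to collect *)

    (* Free-list allocation: first-fit search over free chunks; consecutive
       allocations can land far apart, hurting locality. *)
    type free_list = { mutable chunks : (int * int) list }   (* (offset, size) *)

    let free_alloc fl size =
      let rec go acc = function
        | [] -> None
        | (off, sz) :: rest when sz >= size ->
            let leftover = if sz > size then [ (off + size, sz - size) ] else [] in
            fl.chunks <- List.rev_append acc (leftover @ rest);
            Some off
        | chunk :: rest -> go (chunk :: acc) rest
      in
      go [] fl.chunks

The trade-off noted above falls out of this picture: the bump allocator cannot reuse individual freed slots, so reclaiming space eventually means evacuating or sweeping whole blocks rather than freeing objects one at a time.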
LVars: monotonic update for deterministic parallel programming
LVars are one outcome of Lindsey Kuper's ongoing PhD work at Indiana University. They generalize existing models for deterministic parallelism by considering a general framework of monotonic read and write operations. They were briefly mentioned on LtU before (along with the strongly related work on Bloom in the distributed systems community), and were recently presented in two distinct and complementary articles. The first article describes the basic building blocks and ideas of LVars:
The second relaxes the original model by introducing failure, which widens its applicability:
Something I personally found surprising and impressive about LVars is that, while I was initially interested in the very formal aspects of providing a theoretical framework for deterministic concurrency, it very quickly produced a practical library that people can use to write parallel programs -- one that is competitive with existing high-performance approaches. As described in a recent blog post, a Haskell library is available on Hackage -- but surely LVars-inspired libraries could make sense in a lot of other languages as well.
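To give a flavor of the model, here is a hand-rolled OCaml toy (not the Hackage library's API; the set lattice and the names new_lvar/put/get are invented for illustration): the LVar's state only ever grows by taking least upper bounds, and reads are threshold reads that reveal information only once the state has passed a given point, so results cannot depend on the order of concurrent writes.

    module IntSet = Set.Make (Int)

    (* A toy LVar over the lattice of integer sets ordered by inclusion. *)
    type lvar = { mutable state : IntSet.t; lock : Mutex.t }

    let new_lvar () = { state = IntSet.empty; lock = Mutex.create () }

    (* put: join new information into the state. The operation is monotonic
       and commutes with any other put, so concurrent writers cannot be
       observed in any particular order. *)
    let put lv xs =
      Mutex.lock lv.lock;
      lv.state <- IntSet.union lv.state (IntSet.of_list xs);
      Mutex.unlock lv.lock

    (* Threshold read: only answers "has the state reached at least
       [threshold]?". A real implementation would block until the threshold
       is crossed instead of returning an option, which is what makes reads
       deterministic. *)
    let get lv threshold =
      Mutex.lock lv.lock;
      let reached = IntSet.subset threshold lv.state in
      Mutex.unlock lv.lock;
      if reached then Some threshold else None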
Copatterns: the final approach to codata?
Andreas Abel and Brigitte Pientka's Well-Founded Recursion with Copatterns: A Unified Approach to Termination and Productivity is one of my highlights of the just-finished ICFP 2013, but it makes sense to focus on the first paper on this work, published at POPL back in January.
Codata has often been discussed here and elsewhere. See notably the discussion on Turner's Total Functional Programming (historical note: the 2004 paper is a beautification of the original 1995 paper, which had much the same ideas), and on the category-theory-inspired Charity language. Given those precedents, it would be easy for the quick reader to "meh" on the novelty of putting "observation" first (elimination rather than introduction rules) when talking about codata; yet the above paper is the first concrete, usable presentation of this observation-first style in a practical setting that feels right, and it solves a long-standing problem that current dependently-typed languages (Coq and Agda) have. Coinduction has an even more prominent role, due to its massive use to define program equivalence in concurrent process calculi; the relevant LtU discussion is the one about Davide Sangiorgi's On the origins of Bisimulation, Coinduction, and Fixed Points. The POPL'13 paper doesn't really tell us how coinduction should be seen with copatterns. It does not address the question of termination, which is the topic of the more recent ICFP'13 paper, but I would say that the answer on that point feels less definitive.
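OCaml has no copatterns, but the style can be approximated (my own sketch, not from the paper): codata is characterized by the observations you can make of it, so a stream is given by its head and tail observations, and a definition like the Fibonacci stream specifies the result of each observation rather than constructing data.

    (* Codata as a record of observations: a stream is whatever answers
       [head] and [tail]. Laziness is simulated with a thunked tail. *)
    type 'a stream = { head : 'a; tail : unit -> 'a stream }

    (* Copattern-style definitions specify each observation of the result. *)
    let rec zip_with f xs ys =
      { head = f xs.head ys.head;
        tail = (fun () -> zip_with f (xs.tail ()) (ys.tail ())) }

    (* The Fibonacci stream, given by its observations:
         head fib        = 0
         head (tail fib) = 1
         tail (tail fib) = zip_with ( + ) fib (tail fib)  *)
    let rec fib =
      { head = 0;
        tail = (fun () ->
          { head = 1;
            tail = (fun () -> zip_with ( + ) fib (fib.tail ())) }) }

    (* take 8 fib = [0; 1; 1; 2; 3; 5; 8; 13] *)
    let rec take n s = if n = 0 then [] else s.head :: take (n - 1) (s.tail ())

In a language with real copatterns the last definition is written by cases on the projections themselves, and productivity (every observation is eventually answered) is what the type checker verifies instead of structural termination.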
xkcd: Functional
XKCD comic on functional programming. Overlay money quote:
The Size-Change Termination Principle for Constructor Based Languages
The Size-Change Termination Principle for Constructor Based Languages, by Pierre Hyvernat:
Looks like a relatively straightforward and complete description of a termination checker based on a notion of 'sized types', limited to first-order programs. LtU has covered this topic before, although this new paper doesn't seem to reference that particular Abel work.
By naasking at 2013-09-19 01:59 | Functional | Theory
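For readers new to the principle: a program passes the size-change test if, along every infinite sequence of calls, some well-founded size measure would have to decrease infinitely often. The Ackermann function is the standard small example, since no single argument shrinks at every call, yet every call decreases the pair (m, n) lexicographically (my own OCaml rendering, not an example taken from the paper):

    (* Each recursive call either shrinks the first argument, or keeps it
       fixed and shrinks the second:
         ack m n -> ack (m-1) _       first argument strictly smaller
         ack m n -> ack m (n-1)       first equal, second smaller
       so no infinite descending chain of calls is possible. *)
    let rec ack m n =
      if m = 0 then n + 1
      else if n = 0 then ack (m - 1) 1
      else ack (m - 1) (ack m (n - 1))

In a constructor-based language the "size" is constructor depth rather than a natural number, but the call-graph analysis is the same idea.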
Types for Flexible Objects
Types for Flexible Objects, by Pottayil Harisanker Menon, Zachary Palmer, Alexander Rozenshteyn, Scott Smith:
An interesting paper I stumbled across quite by accident, it claims quite an ambitious set of features: generalizing previous work on first-class cases while supporting subtyping, mutation, and polymorphism, all with full type inference, in an effort to match the flexibility of dynamically typed languages. It does so by introducing a host of new concepts that are almost-but-not-quite generalizations of existing concepts, like "onions", which are a kind of type-indexed extensible record, and "scapes", which are something like a generalization of pattern-matching cases. Instead of approaching objects via a record calculus, they approach them via the dual: variant matching. Matching functions then have degenerate dependent types, which I first saw in the paper Type Inference for First-Class Messages with Match-Functions. As an interesting aside, Scott Smith was a coauthor on that paper too, but it isn't referenced in the "flexible objects" paper, despite the fact that "scapes" are "match-functions". Overall, quite a dense and ambitious paper, but the resulting TinyBang language looks very promising and quite expressive. Future work includes making the system more modular, as it currently requires whole-program compilation, and adding first-class labels, which in past work has led to interesting results as well. Work exploiting row polymorphism is particularly interesting because it supports efficient compilation to index-passing code for both records and variants; it's not clear whether onions and scapes are also amenable to this sort of translation. Edit: a previous paper was published in 2012, A Practical, Typed Variant Object Model -- Or, How to Stand On Your Head and Enjoy the View. BigBang is their language that provides syntactic sugar on top of TinyBang. Edit 2: commas fixed, thanks!
By naasking at 2013-09-04 16:57 | Meta-Programming | Object-Functional | Theory | Type Theory
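For a rough point of comparison (this is OCaml's row-polymorphic variants, not TinyBang's onions and scapes; the shapes example is invented), here is what first-class, composable match cases look like when each case is an ordinary value over an open variant type:

    (* Individual "cases" are plain functions; their inferred types are
       row-polymorphic, mentioning only the tags they actually handle. *)
    let area_circle = function `Circle r -> Float.pi *. r *. r
    let area_rect   = function `Rect (w, h) -> w *. h

    (* Glue two single-tag cases into one handler for the union of their
       variants; the payload types are still inferred. *)
    let either f g = function
      | `Circle r -> f (`Circle r)
      | `Rect wh  -> g (`Rect wh)

    let area = either area_circle area_rect

    let () =
      Printf.printf "%.2f %.2f\n" (area (`Circle 1.0)) (area (`Rect (2.0, 3.0)))

Row polymorphism is what lets both the individual cases and their composition keep precise types without whole-program analysis; the open question raised above is whether onions and scapes admit a similarly efficient index-passing compilation.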
Dynamic Region Inference
Dynamic Region Inference, by David Pereira and John Aycock:
A novel garbage collector that solves reference counting's cycle problems by introducing "regions", which demarcate possibly cyclic subgraphs. These regions are updated by merge and split operations that occur on pointer update and incrementally on region allocation, respectively, i.e., adding a pointer to B into aggregate C merges their regions, and trying to allocate a new region first attempts to split some random candidate region by computing the local reference counts via union-find of the region's members. Obviously dynamic regions don't share contiguous storage like their static counterparts, so "regions" here are purely a logical concept to ensure completeness of reference counting. The implementation adds two words to each object: one pointing to the object's current region, the other a "next" pointer to the next object in the region. The practicality of this approach isn't clear compared to other cycle detection algorithms, and no benchmarks are provided. I haven't found any follow-up work either.
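The merge half of the scheme is essentially union-find over region identifiers. A minimal OCaml sketch of that piece, with an invented data layout (the paper instead threads the region link and the intrusive "next" pointer through each object's header words):

    (* Regions form a union-find forest, so merging two regions when a
       pointer is written is nearly constant time; any cycle created by the
       write ends up wholly inside one region, where local reference
       counting can later decide whether the region is garbage. *)
    type region = { mutable parent : region option }

    type obj = { mutable region : region; mutable fields : obj list }

    let new_region () = { parent = None }
    let new_obj () = { region = new_region (); fields = [] }

    (* Find the representative region, compressing paths along the way. *)
    let rec find r =
      match r.parent with
      | None -> r
      | Some p ->
          let root = find p in
          r.parent <- Some root;
          root

    (* Storing a pointer from [src] to [dst] merges their regions. *)
    let write_pointer src dst =
      src.fields <- dst :: src.fields;
      let a = find src.region and b = find dst.region in
      if a != b then b.parent <- Some a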
Dependently-Typed Metaprogramming (in Agda)
Conor McBride gave an 8-lecture summer course on Dependently typed metaprogramming (in Agda) at the Cambridge University Computer Laboratory:
The lecture notes, code, and video captures are available online. As with his previous course, the notes contain many(!) mind-expanding exploratory exercises, some of which are quite challenging.
By Ohad Kammar at 2013-08-30 07:34 | Category Theory | Functional | Lambda Calculus | Meta-Programming | Paradigms | Semantics | Teaching & Learning | Theory | Type Theory
Parsing people, unite! Call for position papers for Parsing@SLE (SPLASH, Indianapolis)
Parsing@SLE is a new workshop on parsing programming languages and other software languages. The intended participants are the authors of parser generation tools and parsers for programming languages and other software languages. For the purposes of this workshop, "parsing" is a computation that takes a sequence of characters as input and produces a tree- or graph-shaped model as output. This possibly includes tokenization using regular expressions, deriving trees using context-free grammars, mapping to abstract syntax trees, and perhaps even some semantic analysis.
The goal of the workshop is to bring together today's experts in the field of parsing, in order to explore open questions and possibly forge new collaborations. The topics may include algorithms, implementation and generation techniques, syntax and semantics of meta formalisms (BNF), etc. We expect to attract participants that have been or are developing theory, techniques and tools in the broad area of parsing non-natural languages such as programming languages and other software languages (domain-specific languages, configuration languages, build languages, data description languages, query languages, etc.).
We solicit short abstracts, asking for positions, demonstrations, and early achievements. The submissions will be reviewed on relevance and clarity, and used to plan the mostly interactive sessions of the day.
* workshop website
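As a concrete instance of the workshop's working definition of parsing (characters in, a tree out), here is a throwaway recursive-descent parser in OCaml for an invented toy grammar; it is only an illustration, not connected to any particular tool or formalism under discussion at the workshop.

    (* Grammar (invented for illustration):
         expr ::= term ('+' expr)?
         term ::= digit | '(' expr ')'                      *)
    type tree = Num of int | Add of tree * tree

    exception Parse_error of string

    let parse (src : string) : tree =
      let pos = ref 0 in
      let peek () = if !pos < String.length src then Some src.[!pos] else None in
      let advance () = incr pos in
      let expect c =
        if peek () = Some c then advance ()
        else raise (Parse_error (Printf.sprintf "expected '%c' at %d" c !pos))
      in
      let rec expr () =
        let left = term () in
        if peek () = Some '+' then (advance (); Add (left, expr ())) else left
      and term () =
        match peek () with
        | Some '(' -> advance (); let t = expr () in expect ')'; t
        | Some c when c >= '0' && c <= '9' ->
            advance (); Num (Char.code c - Char.code '0')
        | _ -> raise (Parse_error (Printf.sprintf "unexpected input at %d" !pos))
      in
      let t = expr () in
      if !pos <> String.length src then raise (Parse_error "trailing input");
      t

    (* parse "1+(2+3)" = Add (Num 1, Add (Num 2, Num 3)) *)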