
Taking Off the Gloves with Reference Counting Immix

Taking Off the Gloves with Reference Counting Immix, by Rifat Shahriyar, Stephen M. Blackburn, and Kathryn S. McKinley:

Despite some clear advantages and recent advances, reference counting remains a poor cousin to high-performance tracing garbage collectors. The advantages of reference counting include a) immediacy of reclamation, b) incrementality, and c) local scope of its operations. After decades of languishing with hopelessly bad performance, recent work narrowed the gap between reference counting and the fastest tracing collectors to within 10%. Though a major advance, this gap remains a substantial barrier to adoption in performance-conscious application domains. Our work identifies heap organization as the principal source of the remaining performance gap. We present the design, implementation, and analysis of a new collector, RCImmix, that replaces reference counting’s traditional free-list heap organization with the line and block heap structure introduced by the Immix collector. The key innovations of RCImmix are 1) to combine traditional reference counts with per-line live object counts to identify reusable memory and 2) to eliminate fragmentation by integrating copying with reference counting of new objects and with backup tracing cycle collection. In RCImmix, reference counting offers efficient collection and the line and block heap organization delivers excellent mutator locality and efficient allocation. With these advances, RCImmix closes the 10% performance gap, outperforming a highly tuned production generational collector. By removing the performance barrier, this work transforms reference counting into a serious alternative for meeting high performance objectives for garbage collected languages.

A new reference counting GC based on the Immix heap layout, which purports to close the remaining performance gap with tracing collectors. It builds on last year's work, Down for the count? Getting reference counting back in the ring, which describes various optimizations to raw reference counting that make it competitive with basic tracing. There remained a roughly 10% performance gap with generational tracing, which RCImmix closes by adopting the Immix heap layout with bump pointer allocation (as opposed to the free lists typically used in RC). The improved cache locality of allocation makes RCImmix even faster than the generational tracing Immix collector.
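
For readers who haven't seen Immix, here is a rough sketch of the heap organization the paper is leaning on. This is my own illustration in C, not code from the paper: the sizes, names, and bookkeeping are invented, and the real collector (built on MMTk, as far as I know) also handles acquiring fresh blocks, objects that straddle runs, copying of new objects, and backup cycle collection, none of which appears here. The idea is just that the heap is carved into blocks of small lines, objects are bump-allocated into contiguous runs of free lines, and a per-line live count lets a line be reused as soon as everything on it has died.

#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Sizes for illustration; Immix itself uses 32 KB blocks of 128-byte lines. */
#define BLOCK_SIZE       (32 * 1024)
#define LINE_SIZE        128
#define LINES_PER_BLOCK  (BLOCK_SIZE / LINE_SIZE)

typedef struct {
    uint8_t  *mem;                     /* BLOCK_SIZE bytes of object space */
    uint16_t  live[LINES_PER_BLOCK];   /* per-line count of live objects   */
} Block;

typedef struct {
    Block   *block;    /* block currently being allocated into              */
    uint8_t *cursor;   /* bump pointer within the current run of free lines */
    uint8_t *limit;    /* end of that run                                   */
} Allocator;

static Block *block_new(void) {
    Block *b = calloc(1, sizeof *b);
    b->mem = malloc(BLOCK_SIZE);
    return b;
}

static void allocator_init(Allocator *a, Block *b) {
    a->block  = b;
    a->cursor = a->limit = b->mem;   /* empty run; first alloc searches for one */
}

/* Find the next run of lines whose live counts are all zero and which is big
   enough for the request; point the bump pointer at it. */
static int find_free_run(Allocator *a, size_t bytes) {
    Block *b = a->block;
    for (size_t i = 0; i < LINES_PER_BLOCK; ) {
        if (b->live[i] != 0) { i++; continue; }
        size_t j = i;
        while (j < LINES_PER_BLOCK && b->live[j] == 0) j++;
        if ((j - i) * LINE_SIZE >= bytes) {
            a->cursor = b->mem + i * LINE_SIZE;
            a->limit  = b->mem + j * LINE_SIZE;
            return 1;
        }
        i = j;
    }
    return 0;   /* nothing big enough; a real collector would grab a fresh block */
}

/* Bump-pointer allocation: objects are laid out contiguously, which is where
   the mutator locality and cheap allocation come from. */
static void *gc_alloc(Allocator *a, size_t bytes) {
    if (a->cursor + bytes > a->limit && !find_free_run(a, bytes))
        return NULL;
    uint8_t *obj = a->cursor;
    a->cursor += bytes;
    /* Count the new object against every line it occupies. */
    size_t first = (size_t)(obj - a->block->mem) / LINE_SIZE;
    size_t last  = (size_t)(obj + bytes - 1 - a->block->mem) / LINE_SIZE;
    for (size_t k = first; k <= last; k++)
        a->block->live[k]++;
    return obj;
}

/* Called when an object's reference count drops to zero: a line whose live
   count reaches zero becomes immediately reusable for allocation. */
static void gc_object_died(Block *b, void *obj, size_t bytes) {
    size_t first = (size_t)((uint8_t *)obj - b->mem) / LINE_SIZE;
    size_t last  = (size_t)((uint8_t *)obj + bytes - 1 - b->mem) / LINE_SIZE;
    for (size_t k = first; k <= last; k++)
        b->live[k]--;
}

The point of the sketch is that allocation is a pointer bump plus a couple of counter increments rather than a free-list search, and a decrement to zero frees whole lines for immediate reuse, which is where the locality and throughput win comes from.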

However, bump pointer allocation reduces the incrementality of reference counting and would likely hurt latency. One glaring omission of this paper is the absence of latency/pause-time measurements, an omission typical of reference counting papers, presumably because ref counting is assumed to be inherently incremental. Since RCImmix trades away some of that incrementality for throughput by using bump pointer allocation and copying collection, I'm curious how this impacts pause times.

Reference counting has been discussed a few times here before, and some papers on past ref-counting GCs have been posted in comments, but this seems to be the first top-level post on a competitive reference counting GC.

An "adaptive" LALR(1) parser I've been toying with

Hello LtUers,

I've been toying with this little project during my daily commute (only today learning that it's an area that's been explored off and on since the 60s under the name "adaptive parsers" -- I'll have to update the readme):

https://github.com/kthielen/ww

Some things that I think are interesting about this experiment are:

* lexing/parsing are handled by the same underlying LALR(1) parser-generator algorithm
* the active lexer and parser are determined dynamically (reading input can affect the grammar being used)
* lexemes and "non-terminals" have associated types (and named binding syntax rather than $0,$1,...)
* lexers/parsers are constructed incrementally, so it's easy to point to the first rule that introduces ambiguity (and "nice" error messages are spit out to illustrate where these conflicts come from)

The basic idea here is that you feed this thing some axioms (your primitive functions, which can be used in subsequent lexer/parser reductions), and then you can feed it input to parse and/or extend the syntax of what can be parsed. I have been thinking about using this to allow syntax extension in modules supported by a compiler I've been working on. A simple (working) example from the project page:


syntax lexeme w = '//[^\n]*\n' -> null.

// now that we can write comments -- extend this grammar to accept arithmetic expressions
syntax {
  assocl p.
  lexeme p = "+".

  assocl t before p.
  lexeme t = "*".

  assocl e before t.
  lexeme e = "^".

  lexeme int = x:'[0-9]+' -> toInt x.
  rule expr = x:int -> x.
  rule expr = x:expr p y:expr -> plus x y.
  rule expr = x:expr t y:expr -> times x y.
  rule expr = x:expr e y:expr -> power x y.
  rule statement = e:expr -> e.
}

// with that out of the way, let us now do a little arithmetic
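// given the precedence declared above (^ before *, * before +), this should reduce to 1 + ((3^2)*2) = 19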
1 + 3^2*2

FWIW, I'm not working in an academic setting; this is just your average compiler project in a large non-technology company. I'm curious to know whether others have thoughts/warnings about this approach, or perhaps interesting recent research/discussions I should read (I did turn up this old thread from several years back).