Taking Off the Gloves with Reference Counting Immix

Taking Off the Gloves with Reference Counting Immix, by Rifat Shahriyar, Stephen M. Blackburn, and Kathryn S. McKinley:

A new reference counting GC based on the Immix heap layout, which purports to close the remaining performance gap with tracing collectors. It builds on last year's work, Down for the count? Getting reference counting back in the ring, which describes various optimizations to raw reference counting that make it competitive with basic tracing. A roughly 10% performance gap with generational tracing remains, and RCImmix closes it by adopting the Immix heap layout with bump-pointer allocation (as opposed to the free lists typically used in RC). The improved cache locality of allocation makes RCImmix even faster than the generational tracing Immix collector.
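To make the allocation contrast concrete, here is a rough C sketch, my own illustration and not code from the paper, of the two strategies: free-list allocation, the traditional choice for RC heaps, versus Immix-style bump-pointer allocation over a contiguous block. Real Immix blocks also track live lines for recycling, which I omit here.

#include <stddef.h>
#include <stdint.h>

typedef struct FreeCell { struct FreeCell *next; } FreeCell;

/* Free-list allocation: pop the head cell; objects end up wherever
   earlier frees left holes, so consecutive allocations can be
   scattered across the heap. */
void *freelist_alloc(FreeCell **head) {
    FreeCell *cell = *head;
    if (cell) *head = cell->next;
    return cell;
}

typedef struct {
    uint8_t *cursor;  /* next free byte in the current block */
    uint8_t *limit;   /* end of the current block */
} BumpAllocator;

/* Bump-pointer allocation: advance a cursor through a contiguous block;
   returns NULL when the block is exhausted (a real collector would then
   grab a fresh block). */
void *bump_alloc(BumpAllocator *a, size_t bytes) {
    bytes = (bytes + 7) & ~(size_t)7;             /* 8-byte alignment */
    if (a->cursor + bytes > a->limit) return NULL;
    void *obj = a->cursor;
    a->cursor += bytes;
    return obj;
}

int main(void) {
    static uint8_t block[32 * 1024];   /* stand-in for one Immix block */
    BumpAllocator a = { block, block + sizeof block };
    uint8_t *x = bump_alloc(&a, 24);
    uint8_t *y = bump_alloc(&a, 24);
    return (x && y && y - x == 24) ? 0 : 1;  /* x and y are adjacent */
}

The bump path is just a bounds check and a pointer increment, and consecutive allocations land next to each other in memory; that adjacency is where the cache-locality advantage comes from.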
However, bump-pointer allocation reduces the incrementality of reference counting and would impact latency. One glaring omission of this paper is the absence of latency/pause-time measurements, which is typical of reference counting papers, since ref counting is inherently incremental. Since RCImmix trades off some incrementality for throughput by using bump-pointer allocation and copy collection, I'm curious how this impacts the pause times.

Reference counting has been discussed a few times here before, and some papers on past ref-counting GCs have been posted in comments, but this seems to be the first top-level post on a competitive reference counting GC.
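For anyone who hasn't seen the incrementality point spelled out: naive reference counting does its reclamation work a pointer store at a time, rather than in a separate stop-the-world phase. A minimal sketch, again my own and not RCImmix (which defers and buffers count updates and copies young objects):

#include <stdlib.h>

typedef struct Obj {
    long rc;
    struct Obj *child;   /* one outgoing reference, for brevity */
} Obj;

/* Drop a reference; freeing one object may unchain its children too. */
void release(Obj *o) {
    while (o && --o->rc == 0) {
        Obj *next = o->child;
        free(o);
        o = next;
    }
}

/* The write barrier every pointer store pays under naive RC:
   retain the incoming reference, release the outgoing one. */
void store(Obj **slot, Obj *newval) {
    if (newval) newval->rc++;
    Obj *old = *slot;
    *slot = newval;
    release(old);
}

int main(void) {
    Obj *root = NULL;
    Obj *a = calloc(1, sizeof(Obj));
    Obj *b = calloc(1, sizeof(Obj));
    a->child = b;
    b->rc = 1;           /* a holds the only reference to b */
    store(&root, a);     /* rc(a) = 1 */
    store(&root, NULL);  /* frees a immediately, cascading into b */
    return 0;
}

The loop in release is also where naive RC hides its own worst-case pauses: a single store can cascade into freeing an arbitrarily long chain of objects.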
An "adaptive" LALR(1) parser I've been toying with

Hello LtUers, I've been toying with this little project during my daily commute (only today learning that it's an area that's been explored off and on since the 60s under the name "adaptive parsers" -- I'll have to update the readme): https://github.com/kthielen/ww

Some things that I think are interesting about this experiment:

* lexing/parsing are handled by the same underlying LALR(1) parser-generator algorithm

The basic idea here is that you feed this thing some basic axioms (your primitive functions, which can be used in subsequent lexer/parser reductions); then you can feed it input to parse and/or extend the syntax of what can be parsed. I have been thinking about using this to allow syntax extension in modules supported by a compiler I've been working on.

A simple (working) example from the project page:

// now that we can write comments -- extend this grammar to accept arithmetic expressions
assocl t before p.
assocl e before t.
lexeme int = x:'[0-9]+' -> toInt x.
// with that out of the way, let us now do a little arithmetic

FWIW, I'm not working in an academic setting, just your average compiler project in a large non-technology company. I'm curious to know if others have thoughts/warnings about this approach, or maybe interesting recent research/discussions I should read (I did turn up this old thread from several years back).
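To give a feel for runtime syntax extension without reading the repo: below is a tiny C sketch of a parser whose operator table grows mid-session. This is a simplification I wrote for illustration, using precedence climbing rather than LALR(1) (ww presumably extends its LALR(1) tables when a new rule is fed in), and every name in it is made up.

#include <ctype.h>
#include <stdio.h>

typedef struct { char sym; int prec; long (*apply)(long, long); } Op;
static Op ops[16];
static int nops = 0;

/* "Extending the grammar": register a new infix operator at runtime. */
static void add_op(char sym, int prec, long (*apply)(long, long)) {
    ops[nops++] = (Op){sym, prec, apply};
}

static Op *find_op(char sym) {
    for (int i = 0; i < nops; i++)
        if (ops[i].sym == sym) return &ops[i];
    return 0;
}

static const char *src;   /* cursor into the input being parsed */

static long parse_atom(void) {
    long v = 0;
    while (isdigit((unsigned char)*src))
        v = v * 10 + (*src++ - '0');
    return v;
}

/* Precedence climbing: consume operators at or above min_prec. */
static long parse_expr(int min_prec) {
    long lhs = parse_atom();
    for (;;) {
        Op *op = find_op(*src);
        if (!op || op->prec < min_prec) return lhs;
        src++;
        long rhs = parse_expr(op->prec + 1);   /* left-associative */
        lhs = op->apply(lhs, rhs);
    }
}

static long add(long a, long b) { return a + b; }
static long mul(long a, long b) { return a * b; }

int main(void) {
    add_op('+', 1, add);             /* the initial "axioms" */
    src = "1+2*3";
    printf("%ld\n", parse_expr(0));  /* '*' unknown: parses 1+2, prints 3 */

    add_op('*', 2, mul);             /* extend the accepted syntax */
    src = "1+2*3";
    printf("%ld\n", parse_expr(0));  /* now prints 7 */
    return 0;
}

The point of the two runs is the before/after: the same input parses differently once the '*' "axiom" is registered, which is the shape of the feed-axioms-then-extend workflow described above.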
By Kalani at 2013-10-09 19:35 | LtU Forum