List of ICFP2013 papers with preprints

A "List of ICFP'13 accepted papers, with links to preprint or additional information when available" is being published by user 'gasche' at github: https://github.com/gasche/icfp2013-papers. Any other links, anyone?

Q/A from ICFP, and POPL links (in progress)

Edward Yang made a fantastic effort of recording the question/answer sessions during most ICFP talks last week in Boston. Here is the link. I don't think the slides are available for most of them, so you may lack some context, but it could still be interesting as complementary material -- and it is also very useful for the speakers themselves.

The list of papers accepted to POPL'14 has been available for a couple of days, but the list of preprint links is still in progress -- pull requests are of course welcome.

Interestingly (?), my subjective perception is that most ICFP authors had a preprint available on the web when the "accepted papers" list became available, whereas this is much less the case for POPL authors. Is the purely "functional programming" crowd better at open research?

The uptake of arXiv is still disappointingly low in both cases. We need to put more material (especially the long versions with proofs, with the implementation attached) on arXiv. A preprint list is useful before the conference, but the links will be dead within a few months or years -- arXiv (or HAL, etc.) is there for long-term archival of our research output.

arXiv sucks because they

arXiv sucks because they want to build your LaTeX from scratch and don't take PNG-based figures, at least as of the last time I tried. I tried to get my paper on it but gave up after an hour.

POPL has 50 papers! POPL used to be the least noisy PL conference.

Maybe we should have a conference in SIGPLAN comparable to OSDI.

Help them

I think requiring the original source, and not a compiled format, is the right design decision for long-term archival of content. It will not be perfect (I'm sure some people use a frontend to produce their LaTeX or attached content), but going in this direction is a good thing. For example, arXiv actually leaves the door open to producing reflowable documents fit for viewing on devices smaller than A4 or US-letter PDFs, and can adapt over time to new consumption formats as they become popular.

You should consider filing a request for PNG figure support, instead of rejecting the tool as a whole. In the meantime, you could also use workarounds -- e.g. embedding the PNG into a PDF to include directly may work, despite being a non-optimal solution; see the sketch below.
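For what it's worth, a minimal sketch of that PNG-to-PDF workaround, in Python with the Pillow imaging library (the file names are just placeholders):

    # Wrap a PNG figure in a single-page PDF, so that a LaTeX
    # toolchain that rejects PNGs can still include the figure.
    from PIL import Image

    img = Image.open("figure.png").convert("RGB")  # flatten any alpha channel
    img.save("figure.pdf", resolution=300.0)       # embed at 300 dpi

In the LaTeX source you would then write \includegraphics{figure} with no extension, so the document builds regardless of whether the .png or the .pdf gets picked up.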

They do support PNG now,

They do support PNG now; perhaps back then I just couldn't figure out how to upload the 20 or 30 files that I typically build my papers from. To add insult to injury, those who use Word get a pass and can submit just a PDF -- much easier! Unfortunately, I don't want to write my papers in Word.

They have sacrificed usability for uber-LaTeX flexibility that no one really cares about, and they never actually enable any cool capability because, surprise surprise, LaTeX is not as flexible as the writing on the package claims.

Usability matters. Flexibility is a geek illusion.

Do submit there in any case

Fair point, but arXiv is already useful today and has become a de facto standard in some scientific communities not too far from ours (theoretical maths and physics at least), so there is much to gain by importing it into our research practice.

If you are interested enough in the usability aspect to invest more time in it, you may want to get in touch with the arXiv administrators to discuss such policies with them, and maybe lend a hand. It's not my personal cup of tea.

A couple of years ago, POPL

A couple of years ago, POPL moved to a two-track format. A lot of people were unhappy with acceptance rates in the ~15% range, for a variety of reasons. You can read the discussion that led to the change on the TYPES list here.

I was on the PC this year, and was very pleasantly surprised by the quality of the submissions. Not many people send junk to POPL -- even the rejected papers usually had some good features to them.

I'm quite sure they do,

I'm quite sure they do have the quality, actually. But consider that the average POPL paper takes 2-3 times longer to read, or even to grok whether it should be read, than a standard PLDI paper. That is a lot of overhead just in going through the accepted papers. There is also the question of how big the community can be with respect to available funding and opportunities; even as Asia has come online, its universities have failed to provide much in the way of opportunities (just more people chasing the same number of positions). In that context, the OSDI model becomes appealing: it keeps the inner ring smaller because it is actually size-constrained, both in the capacity of readers to digest the material and in the number of people to be promoted.

But I think broader fixes should be explored: can the conference system be proactive in not just picking winners, but in raising promotion rates for those winners, to (a) help make their work more visible (ACM works against this with its insistence on a paywall) and (b) in doing so, help grow the demand side of the community?

Huh?

I'm not very familiar with reviewing processes or research organization discussions, so I would be interested in more explanations -- I mostly don't understand what you mean. What is the OSDI model you're referring to?

If I understand correctly, the problem you identify with having 50 papers is that it takes too long to know which ones you are interested in. But I'm not sure how having fewer papers at POPL would solve that issue; wouldn't people just submit somewhere else, leaving you with just as much work tracking interesting submissions?

Accepting fewer papers puts

Accepting fewer papers puts the spotlight on fewer papers, correspondingly providing them with more publicity. Going from 25 to 50 papers potentially halves the acceptance boost, perhaps even more. Keep in mind that this boost only comes from (a) conference attendees who saw your talk or (b) someone leafing through the list of accepted papers. The ACM DL is pointless, and I haven't seen any conference that devoted resources to promoting the papers it accepted (that is the author's job).

Also, the second/third-tier, or just "more niche", PL conferences have already been decimated by popular first-tier venues that accept so many papers these days. It used to be OK to submit to a conference that wasn't POPL, PLDI, OOPSLA, ECOOP, or ICFP; now... maybe not. This makes it very difficult to create new branches in the community through new, innovative conferences.

Finally, for the same reason, the first-tier conferences are seriously lacking in focus... taking in content that would really be better off in a more niche venue. I was just checking out the ECOOP call; it's anything but... objects!

What is "open research"?

It's very important for papers to be available long-term, and for free, as you say. But I don't understand why wanting to release only a final version is less "open" in a meaningful way. We don't demand this of book authors, or musicians, or anyone else.

What is open

Research is a process, not a product. Thus, 'open research' must be an open process.

What does it mean for a process to be open? To me, it suggests effective support for collaboration by multiple contributors. And not just a 'closed' (centrally maintained) set of contributors (like a 'team' or a small business) but rather an 'open' set - i.e. a community, with ability to discover, communicate, contribute.

The processes of authoring and music composition are often closed. But there has been exploration of technologies for open authoring processes - e.g. wikis, telepresence, blackboard metaphors. Even if you don't see value in such open processes, they certainly seem more open in a meaningful way. Also, there may be value you are not seeing.

a draft is not the end

I am not of the opinion that it is morally bad to release (to everyone and for free) only a final version rather than a draft. Awkwardness in my expression aside, I understand I may have given this impression for two distinct reasons:

- I have the impression that people who never release a preprint in an accessible way also never release a final version; some researchers are happy to just let, say, the ACM Digital Library distribute their work, which (with the exception of the recent "author link" initiative and potential OA schemes where authors pay a fee) means non-universal availability of the research. Of course it's a correlation (mostly supposed so far; I don't have actual statistics), not a general truth -- let me know if you are a counterexample, or know people who are.

- There are legal uncertainties around the idea of distributing a final version to the public. It may be the version that is covered by a copyright transfer agreement, for example. Releasing a draft/preprint feels "even safer" because at that point the document has not benefitted from any work provided by the editor at all. I'm fine with giving a URL for a "final version" that would be put online (in any case, giving a link is not distribution), but I've been careful to emphasize the "draft" or "preprint" status of the documents where it applied -- I'll let the authors themselves decide how they want to handle the final version.

Now, if you ask me, I think there are a lot of good reasons for publicly releasing (either on your website or on an archival site) a version of your paper at the time of submission (possibly a long version with proofs and without page limits), rather than at some arbitrary future date. It lets people know about your work, which is generally a good thing. Maybe more importantly, it lets people who are considering attending the conference form a much better idea of your work, to decide whether they want to come to the talk (and maybe prepare a better question than a 15-minute talk allows).

So I'd say having a preprint publicly available is a good thing, but of course there is no obligation to do so. But are there good reasons not to? In particular, do some people think that an article is not ready for the (early, motivated and sympathetic) public eye, but still consider it worthy of reviewers' time?

Counterexamples

I sometimes release preprints, and sometimes don't, but every paper I've ever written is available from my web site. Some of my collaborators, such as Matthias Felleisen, are even less likely to release preprints, but also make all their papers public.

On the legal point, I don't think the different versions are really subject to different legal rules, although we often act like they are.

Finally, I think there are two potential reasons not to release a draft publicly (not that I think these are always good reasons). First, that you'd like your paper to be associated, for most people, with its best version. Second, that reviewers have certain obligations that the general public doesn't have (not to steal your ideas before they're published, for example).

Second, that reviewers have

Second, that reviewers have certain obligations that the general public doesn't have (not to steal your ideas before they're published, for example).

Dissemination of research means "please steal my ideas (and cite me)"; why would I want that to happen later rather than sooner?

Point taken

That's reasonable.

Interestingly (?), my

Interestingly (?), my subjective perception is that most ICFP authors had a preprint available on the web when the "accepted papers" list became available, whereas this is much less the case for POPL authors.

For ICFP, I think there was about a week between the time acceptance/rejection emails went out and the time the accepted-papers list was posted. For POPL, those things happened the same day. So, with ICFP, people with accepted papers had a little more time to clean up their preprints and post them, whereas with POPL, they might have been caught unawares.