International Conference on Live Coding
With pleasure we announce the initial call for papers and performances for the first International Conference on Live Coding, hosted by ICSRiM in the School of Music, University of Leeds, UK.
This conference follows a long line of international events on liveness in computer programming: the Changing Grammars live audio programming symposium in Hamburg in 2004, the LOSS Livecode festival in Sheffield in 2007, the annual Vivo festivals in Mexico City from 2012, the live.code.festival in Karlsruhe, the LIVE workshop at ICSE on live programming, and Dagstuhl Seminar 13382 on Collaboration and Learning through Live Coding in 2013, as well as numerous workshops, concerts, algoraves and conference special sessions. It also follows a series of Live Coding Research Network symposia on diverse topics, and the activities of the TOPLAP community since 2004. We hope that this conference will act as a confluence for all this work, helping to establish live coding as an interdisciplinary field, exploring liveness in symbolic abstractions, and understanding the perceptual, creative, productive, philosophical and cultural consequences.
The proceedings will be published with an ISSN, and there will also be a follow-on opportunity to contribute to a special issue of the Journal of Performance Arts and Digital Media; details will be announced soon.
* Templates available and submissions system open: 8th December 2014
* Long papers (6-12 pages)
ICLC is an interdisciplinary conference, so a wide range of approaches are encouraged and we recognise that the appropriate length of a paper may vary considerably depending on the approach. However, all submissions must propose an original contribution to Live Coding research, cite relevant previous work, and apply appropriate research methods.
The following long list of topics, contributed by early career researchers in the field, is indicative of the breadth of research we wish to include:
* Live coding and the body; tangibility, gesture, embodiment
Submissions that go beyond the above are encouraged, but all should have live coding research or practice at their core. Please contact us if you have any questions about the remit.
Please email feedback and/or questions to email@example.com
I want generics, in particular polymorphic functions as well as function overloading based on type classes; for the moment, think of Haskell type classes and C++ concepts. I also want code completion. Code completion works for OO languages because they are polymorphic on the object type, which prefixes the function call, so the type is known when the completion lookup occurs. Generic functions, by contrast, can be polymorphic on any argument. This suggests that all the arguments need to go before the operator, in postfix format. That way, code completion can operate on the argument types and offer only valid generic functions, listing all the generic functions that take those arguments in alphabetical order. First, I want to discuss whether this idea has any merit; second, I want to discuss syntax:
So, a reverse-Lisp style (likely to confuse people, as there is no clear difference from Lisp's function-first format):
(list comparator sort)
or perhaps reverse C:
(list, comparator) sort
Or perhaps a template-C++-like syntax:
<list, comparator> sort
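To make the type-class side of this concrete, here is a minimal sketch using C++20 concepts (one of the two mechanisms mentioned above; `sort_in_place` and the example types are my own, purely illustrative) of a generic function overloaded on the capabilities of its arguments rather than on a prefixed object type:

    #include <algorithm>
    #include <iostream>
    #include <list>
    #include <ranges>
    #include <vector>

    // Two overloads of one generic function, selected by what the argument
    // type can do (its "type class"), not by a prefixed object type.
    template <std::ranges::random_access_range R>
    void sort_in_place(R& r) {
        std::ranges::sort(r);   // random access: ordinary in-place sort
    }

    template <typename T>
    void sort_in_place(std::list<T>& l) {
        l.sort();               // linked list: must use the member sort
    }

    int main() {
        std::vector<int> v{3, 1, 2};
        std::list<int>   l{3, 1, 2};
        sort_in_place(v);       // the range constraint selects overload 1
        sort_in_place(l);       // the exact match selects overload 2
        std::cout << v.front() << ' ' << l.front() << '\n';  // prints "1 1"
    }

The constraints are exactly the information an arguments-first completion engine would need: given the types already typed in, it can enumerate precisely those generic functions whose constraints the types satisfy.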
I think this paper is a really nice blast from the past.
Too bad it doesn't seem to be available online anywhere. Might any LtU user have the social network to find it / obtain it / write a eulogy about it?
I enjoyed the discussion in Why do we need modules at all? (about Joe Armstrong's post on the same topic) more than anything else on LtU in the last few years. Several comments reminded me of something I've been thinking about for several years now. But describing it doesn't seem on topic as a reply (or I lack the objectivity to tell). The modules thread is already long, and I don't want to pollute it with junk.
This is mainly about transpiling, where you compile one source to generate another, with a goal of automated rewriting where both syntax and complexity in the input and output are similar: compiling one language to itself, but re-organized. For example, you can use continuation passing style (CPS) to create fibers (lightweight threads) in a language without them natively, like C. Obviously this is language agnostic: you can add cooperative fibers to any language this way, from assembler to Lisp to Python, as long as you call only code that has been similarly transformed. You're welcome to free associate about this or anything else mentioned, though.

For the last year or so, almost all my thinking on the topic has been about environment structure, lightweight process management, and scripts effecting loose-coupling relationships via mechanisms similar to Unix, such as piping streams. Once you work out how a core kernel mechanism works, all the interesting parts are in the surrounding context, which grows in weirdly organic ways as you address sub-problems in ad hoc ways. Thematically, it's like a melange of crap sintered together, as opposed to being unified, or even vaguely determined by constraints stronger than personal taste.
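As a minimal sketch of the shape that CPS-plus-trampoline rewrite produces (written here as C-style C++; `Fiber`, `count_step`, and the scheduler are mine, purely illustrative): each function loses its call stack and becomes a step function over explicit state, and a scheduler loop decides when each step resumes:

    #include <cstdio>
    #include <deque>

    // After a CPS rewrite there is no call stack to suspend. A fiber is a
    // step function plus its explicit state; each step does a bounded piece
    // of work and records which step should run next.
    struct Fiber;
    using StepFn = void (*)(Fiber&);

    struct Fiber {
        StepFn step;        // the continuation; nullptr when finished
        const char* name;   // state that used to live in local variables
        int i, n;
    };

    // What was once "for (i = 0; i < n; i++) { print; yield(); }"
    void count_step(Fiber& f) {
        if (f.i >= f.n) { f.step = nullptr; return; }  // fiber finished
        std::printf("%s: %d\n", f.name, f.i);
        f.i += 1;  // advance explicit state; f.step remains count_step,
                   // so the scheduler re-enters here after the "yield"
    }

    int main() {
        // A round-robin trampoline: the scheduler, not the C stack, decides
        // when each fiber resumes, so the two loops interleave cooperatively.
        std::deque<Fiber> run_queue = {
            {count_step, "A", 0, 3},
            {count_step, "B", 0, 3},
        };
        while (!run_queue.empty()) {
            Fiber f = run_queue.front();
            run_queue.pop_front();
            f.step(f);
            if (f.step) run_queue.push_back(f);  // not done: requeue
        }
    }

Nothing here ever blocks: every suspension point has been flattened into explicit state plus a next-step pointer, which is why the transform only composes with code that has been rewritten the same way.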
Note I'm not working on this, thus far anyway, so treat anything I say as an armchair design: there's no code under way. But I've probably put a couple thousand hours of thinking into it over the last seven years, and parts may have a realistic tone to them, insofar as they'd work. It started out as a train of thought beginning with "how do I make C behave like Erlang?" and grew from there. I've been trying to talk myself into wanting to code parts of it sometime.
Several folks mentioned ideas in the other thread that I want to talk about below, in the context of details of the armchair design problem. (I'll try to keep names of components to a minimum, but it gets confusing to talk about things with no names.) Most of the issues I want to touch on deal with names, dependencies, modules, visibility, communication, and the like, in concurrent code where you want to keep insane levels of complexity under control if possible.
Impact of static type systems on productivity of actual programmers: first experiment I've seen documented.
It is disappointing that no particular results, positive or negative, were observed. But it is gratifying, and long overdue, that someone finally thought the question important enough to perform an actual experiment to answer it.
That's a lot of money for a good idea (using large bodies of existing code to complete what the programmer writes) that everyone seems to be working on. Does Rice have some special advantage here that justifies the investment?
Post by Joe Armstrong of Erlang fame. Leader:
Here's a very impressive speedup on a program analysis problem by using a GPU.
I read a submission on HN that talks about Kdb+.
The linked page claims the interpreter is faster than C:
Kdb+ is a testament to the rewards available from finding the right abstractions. K programs routinely outperform hand-coded C. This is of course, impossible, as The Hitchhiker’s Guide to the Galaxy likes to say. K programs are interpreted into C. For every K program there is a C program with exactly the same performance. So how do K programs beat hand-coded C? As Whitney explained at the Royal Society in 2004, “It is a lot easier to find your errors in four lines of code than in four hundred.”
Now, I'm a noob about compilers, but I understand that (a) interpreters are easier to write than compilers, and (b) they are slow, probably very, very slow.
How can it be faster? How does it do that?
I don't see how “It is a lot easier to find your errors in four lines of code than in four hundred” can be the explanation.
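One common account of this (my own gloss, not from the linked page): in an array language, the interpreter's per-operation overhead is paid once per vector rather than once per element, while each primitive is itself a tight compiled loop over contiguous memory. A toy sketch in C++, with entirely hypothetical names:

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy "array interpreter": one opcode applies a primitive to whole vectors.
    enum class Op { Add, Mul };

    // The primitives are ordinary tight loops, exactly as optimizable as
    // hand-written C, because they effectively are C.
    void eval(Op op, const std::vector<double>& a,
              const std::vector<double>& b, std::vector<double>& out) {
        switch (op) {  // interpretive dispatch: paid once per vector
        case Op::Add:
            for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] + b[i];
            break;
        case Op::Mul:
            for (std::size_t i = 0; i < a.size(); ++i) out[i] = a[i] * b[i];
            break;
        }
    }

    int main() {
        const std::size_t n = 1000000;
        std::vector<double> a(n, 1.5), b(n, 2.5), out(n);
        eval(Op::Add, a, b, out);     // ~n additions for a single dispatch
        std::printf("%f\n", out[0]);  // prints 4.000000
    }

On a million-element vector, virtually all the time is spent inside the compiled primitive, so interpretation costs almost nothing; any further wins come from the short program making it easier to pick the right algorithm and memory layout, which is presumably what the Whitney quote is getting at.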