Breaking the Complexity Barrier of Pure Functional Programs with Impure Data Structures

Breaking the Complexity Barrier of Pure Functional Programs with Impure Data Structures, by Pieter Wuille and Tom Schrijvers:
This paper is along the same lines as a question I asked a couple of years ago. The idea here is to allow programming against immutable interfaces, and then to automatically transform the program into a more efficient mutable equivalent.

Inside the Wolfram Language

Video of Stephen Wolfram showing off the Wolfram Language and sharing his perspective on the design of the language, at the Strange Loop conference.

What's in store for the most widely used language by discerning hackers?

Or, in other words, what's the future of Emacs Lisp (and the unavoidable HN discussion)? The original message contains some interesting tidbits. I am not sure how the discussion on emacs-devel will develop. But speculating about things such as Guile elisp is, of course, our bailiwick.

An operational and axiomatic semantics for nondeterminism and sequence points in C

In a recent LtU discussion, naasking comments that "I always thought languages that don't specify evaluation order should classify possibly effectful expressions that assume an evaluation order to be errors". Recent work on the C language has provided reasonable formal tools to reason about evaluation order for C, which has very complex evaluation-order rules: An operational and axiomatic semantics for nondeterminism and sequence points in C.
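naasking's observation is not specific to C: OCaml, for instance, also leaves the evaluation order of function arguments unspecified, so effectful arguments can silently depend on the compiler's choice. A minimal illustrative sketch (the helper names are my own, not from the paper):

```ocaml
(* OCaml, like C, does not specify the order in which function
   arguments are evaluated.  This expression's result therefore
   depends on a choice the language leaves open. *)
let order_dependent () =
  let r = ref 0 in
  let pair a b = (a, b) in
  (* Both arguments increment r.  Depending on whether the compiler
     evaluates arguments left-to-right or right-to-left, this is
     (1, 2) or (2, 1). *)
  pair (incr r; !r) (incr r; !r)

(* A disciplined rewrite makes the sequencing explicit with lets,
   which is the kind of program the quoted comment would accept. *)
let order_explicit () =
  let r = ref 0 in
  let a = (incr r; !r) in
  let b = (incr r; !r) in
  (a, b)  (* always (1, 2) *)
```

The separation logic described below is, roughly, a formal tool for proving that a C program never relies on such an order.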
One aspect of this work that I find particularly interesting is that it provides a program (separation) logic: there is a set of inference rules for a judgment of the form \(\Delta; J; R \vdash \{P\} s \{Q\}\), where \(s\) is a C statement and \(P, Q\) are logical pre- and postconditions, such that if the judgment holds, then the statement \(s\) has no undefined behavior related to expression evaluation order. This opens the door to practical verification that existing C programs are safe in a very strong sense (all of this is validated in the Coq theorem prover).

Luca Cardelli Festschrift

Earlier this week Microsoft Research Cambridge organised a Festschrift for Luca Cardelli. The preface from the book:
Hopefully the videos will be posted soon.

By Ohad Kammar at 2014-09-12 10:10 | Category Theory | Lambda Calculus | Misc Books | Semantics | Theory | Type Theory | 4 comments
Rethinking Prolog

A recent paper by Oleg Kiselyov and Yukiyoshi Kameyama at the University of Tsukuba discusses weaknesses of, and areas for improvement to, Prolog.
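One of the paper's prototypical examples is running append backwards: given a result list, enumerate the pairs of lists that append to it. In a functional language without logic variables, that inverse can be written by hand as an enumeration of splits; a minimal OCaml sketch (my own illustration, not the paper's code):

```ocaml
(* "Running append backwards" asks: given zs, which pairs (xs, ys)
   satisfy xs @ ys = zs?  A logic language gets the enumeration from
   the append relation itself; here we write the inverse directly. *)
let rec splits (zs : 'a list) : ('a list * 'a list) list =
  match zs with
  | [] -> [ ([], []) ]
  | z :: rest ->
      (* Either xs is empty, or its head is z and we split the rest. *)
      ([], zs) :: List.map (fun (xs, ys) -> (z :: xs, ys)) (splits rest)

(* splits [1; 2] = [([], [1; 2]); ([1], [2]); ([1; 2], [])] *)
```

The point of miniKanren-style embeddings is precisely that one does not have to write such inverses by hand: a single relational definition serves in all modes.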
The paper mentions the strengths of the approaches used by miniKanren (which embeds logic programming, with a fairer search strategy than normal Prolog's, into Scheme) and Hansei (which embeds probability-based nondeterminism into OCaml, using delimited continuations to allow direct-style expression of monadic code). After motivating some choices by studying the prototypical example of running append backwards, they cover running parsers with the "maximal munch" rule backwards, something that cannot be (declaratively) expressed in Prolog. A very interesting paper on logic programming! It also thanks Tom Schrijvers of CHR fame at the end.

Scratch jr

Scratch jr is an iPad version of the Scratch environment, designed with young kids in mind. It is the best kid-oriented programming tool I have tried so far, and my five year old has great fun making "movies" with it. As I noted on Twitter an hour after installing it, the ability to record your own voice and use it for your sprites is a killer feature. Check it out!

Scala woes?

A fork in the back? See the discussion over at HN. People in the know are encouraged to shed light on the situation.

Howard on Curry-Howard

Philip Wadler posts his exchange with William Howard on the history of the Curry-Howard correspondence: Howard on Curry-Howard.

Cost semantics for functional languages

There is an ongoing discussion on LtU (there, and there) on whether RAM and other machine models are inherently a better basis for reasoning about (time and) memory usage than the lambda-calculus and functional languages. Guy Blelloch and his colleagues have been doing very important work on this question that seems to have escaped LtU's notice so far. A portion of the functional programming community has long been of the opinion that we do not need to refer to machines of the Turing tradition to reason about the execution of functional programs.
Dynamic semantics (which are often perceived as more abstract and elegant) are adequate, self-contained descriptions of computational behavior, which we can elevate to the status of a (functional) machine model, just as "abstract machines" can be seen as just machines. This opinion has been made scientifically precise by various lines of work, including for example implicit (computational) complexity, resource analysis, and cost semantics for functional languages. Guy Blelloch developed a family of cost semantics: annotations of the operational semantics of functional languages with new information that captures more intensional behavior of the computation, not only the result but also running time, memory usage, degree of parallelism and, more recently, interaction with a memory cache. Cost semantics are a self-contained way to think about the efficiency of functional programs; they can of course be put in correspondence with existing machine models, and Blelloch and his colleagues have proved a vast number of two-way correspondences, with the occasional extra logarithmic overhead; or, from another point of view, they have provided provably cost-effective implementations of functional languages in imperative languages and conversely. This topic has been discussed by Robert Harper in two blog posts: Language and Machines, which develops the general argument, and a second post on recent joint work by Guy and him on integrating cache-efficiency into the model. Harper also presents various cost semantics (called "cost dynamics") in his book "Practical Foundations for Programming Languages". In chronological order, three papers representative of the evolution of this work are the following.

Parallelism In Sequential Functional Languages
Space Profiling for Functional Programs

This paper clearly defines a notion of ideal memory usage (the set of store locations that are referenced by a value or an ongoing computation) that is highly reminiscent of garbage collection specifications, but without making any reference to an actual garbage collection implementation.
Cache and I/O efficient functional algorithms

The cost semantics in this last work incorporates more notions from garbage collection, to reason about cache-efficient allocation of values, in that it relies on work on formalizing garbage collection that has been mentioned on LtU before.
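As a minimal illustration of what such a cost semantics looks like, here is a toy sketch in OCaml: an evaluator for arithmetic expressions annotated with work and span, the two quantities at the heart of Blelloch's parallel cost models. The cost assignments (one unit per operation, zero for literals) are illustrative assumptions of mine, not the model of any of the papers above.

```ocaml
(* A toy cost semantics: the evaluator returns not only the value of
   an expression but also its work (total number of steps) and span
   (critical-path length if independent subexpressions were evaluated
   in parallel). *)
type expr =
  | Lit of int
  | Add of expr * expr
  | Mul of expr * expr

(* eval e = (value, work, span) *)
let rec eval (e : expr) : int * int * int =
  match e with
  | Lit n -> (n, 0, 0)
  | Add (a, b) ->
      let (va, wa, sa) = eval a in
      let (vb, wb, sb) = eval b in
      (va + vb, wa + wb + 1, max sa sb + 1)
  | Mul (a, b) ->
      let (va, wa, sa) = eval a in
      let (vb, wb, sb) = eval b in
      (va * vb, wa + wb + 1, max sa sb + 1)

(* (2*3) + (4*5): work 3 (three operations), but span only 2, since
   the two multiplications are independent. *)
let example = eval (Add (Mul (Lit 2, Lit 3), Mul (Lit 4, Lit 5)))
```

The appeal of this style is that work and span are defined on the program itself, with no reference to a machine; the correspondence theorems mentioned above then relate them to running times on concrete machine models.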
By gasche at 2014-08-14 11:53 | Implementation | Lambda Calculus | 33 comments
