Second-order logic explained in plain English

John Corcoran, Second-order logic explained in plain English, in Logic, Meaning and Computation: Essays in Memory of Alonzo Church, ed. Anderson and Zelëny.

There is something a little bit Guy Steele-ish about trying to explain the fundamentals of second-order logic (SOL, the logic Quine branded as set theory in sheep's clothing) and its model theory while avoiding any formalisation. The paper introduces the ideas of SOL by looking at logics with finite, countable and uncountable models, and then presents FOL and SOL as complementary approaches to axiomatisation, each deficient by itself. Corcoran ends with a plea for SOL as an essential tool, at least as a heuristic.

The evolution of Rust

Graydon Hoare is the original developer of Rust, having worked on the language even before Mozilla adopted it. For the 1.0 release he prepared a lightning talk on how the language changed over its ten years of development.
He published only some bullet points, but even the topic list is interesting:

  • Six ways Rust is fundamentally different from how it started
  • Six ways Rust is fundamentally the same as how it started
  • Six things we lost along the way
  • Six things we gained along the way
  • Six things I'm irrationally, disproportionately pleased by

Read the full blog post for the content of the five lists.

Composite Replicated Data Types: eventually consistent libraries as non-leaky abstractions

Composite Replicated Data Types
Alexey Gotsman and Hongseok Yang
2015

Modern large-scale distributed systems often rely on eventually consistent replicated stores, which achieve scalability in exchange for providing weak semantic guarantees. To compensate for this weakness, researchers have proposed various abstractions for programming on eventual consistency, such as replicated data types for resolving conflicting updates at different replicas and weak forms of transactions for maintaining relationships among objects. However, the subtle semantics of these abstractions makes using them correctly far from trivial.

To address this challenge, we propose composite replicated data types, which formalise a common way of organising applications on top of eventually consistent stores. Similarly to a class or an abstract data type, a composite data type encapsulates objects of replicated data types and operations used to access them, implemented using transactions. We develop a method for reasoning about programs with composite data types that reflects their modularity: the method allows abstracting away the internals of composite data type implementations when reasoning about their clients. We express the method as a denotational semantics for a programming language with composite data types. We demonstrate the effectiveness of our semantics by applying it to verify subtle data type examples and prove that it is sound and complete with respect to a standard non-compositional semantics.
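
The paper presumes some familiarity with replicated data types. As a rough illustration of the idea (my own sketch, not the paper's formalism), here is a state-based grow-only counter in OCaml: each replica increments only its own entry, and concurrent states are reconciled with a commutative merge, so replicas converge.

(* A toy state-based grow-only counter ("G-counter"): each replica
   increments only its own slot; diverged states are merged with a
   pointwise max, which is commutative and idempotent. *)
module GCounter = struct
  type t = (string * int) list                 (* replica id -> count *)

  let empty : t = []

  let incr replica (c : t) : t =
    let n = try List.assoc replica c with Not_found -> 0 in
    (replica, n + 1) :: List.remove_assoc replica c

  let value (c : t) : int =
    List.fold_left (fun acc (_, n) -> acc + n) 0 c

  let merge (a : t) (b : t) : t =
    let ids = List.sort_uniq compare (List.map fst (a @ b)) in
    let get c id = try List.assoc id c with Not_found -> 0 in
    List.map (fun id -> (id, max (get a id) (get b id))) ids
end

let () =
  (* Two replicas update independently, then exchange and merge state. *)
  let r1 = GCounter.(incr "r1" (incr "r1" empty)) in
  let r2 = GCounter.(incr "r2" empty) in
  Printf.printf "value = %d\n" (GCounter.value (GCounter.merge r1 r2))
  (* prints: value = 3 *)

The paper's composite data types are about encapsulating several such replicated objects behind one interface, accessed through transactions, and reasoning about clients without looking inside.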

Draining the Swamp: Micro Virtual Machines as Solid Foundation for Language Development

Draining the Swamp: Micro Virtual Machines as Solid Foundation for Language Development
Kunshan Wang, Yi Lin, Stephen Blackburn, Michael Norrish, Antony Hosking
2015

Many of today's programming languages are broken. Poor performance, lack of features and hard-to-reason-about semantics can cost dearly in software maintenance and inefficient execution. The problem is only getting worse with programming languages proliferating and hardware becoming more complicated. An important reason for this brokenness is that much of language design is implementation-driven. The difficulties in implementation and insufficient understanding of concepts bake bad designs into the language itself. Concurrency, architectural details and garbage collection are three fundamental concerns that contribute much to the complexities of implementing managed languages. We propose the micro virtual machine, a thin abstraction designed specifically to relieve implementers of managed languages of the most fundamental implementation challenges that currently impede good design. The micro virtual machine targets abstractions over memory (garbage collection), architecture (compiler backend), and concurrency. We motivate the micro virtual machine and give an account of the design and initial experience of a concrete instance, which we call Mu, built over a two year period. Our goal is to remove an important barrier to performant and semantically sound managed language design and implementation.
Inside you will find the specification of an LLVM-inspired virtual instruction set with a memory model (enabling proper GC support), including a specification of concurrent weak-memory operations (reusing the C/C++11 model, a debatable choice), relatively rich control-flow primitives (complete stack capture, enabling coroutines or JIT-style de-optimization), and live code update.

Eve: the development diary of a programming environment aimed at non-programmers

In spring 2012 Chris Granger ran a successful Kickstarter campaign, raising $300K (against the requested $200K) to work on a live-feedback IDE inspired by Bret Victor's "Inventing on Principle" talk. The IDE project was called Light Table. It initially supported only Clojure (the team's favourite language), but eventually added support for Javascript and Python. In January 2014, Light Table was open sourced, and in October 2014 the Light Table development team announced that they had decided to create a new language, Eve, that would be a better fit for their vision of the programming experience.

There is little public information about Eve so far, and no precise design documents, but the development team keeps a public monthly Development Diary that I found fairly interesting. It displays an interesting form of research culture, in particular with recurrent references to academic work coming from outside the programming-language-research community: database queries, Datalog evaluation, distributed systems, version-control systems. The diary might be a good opportunity to look at the internals of a language design process (or really a programming-environment design process) that is neither academic nor really industrial in nature. It seems more representative (I hope!) of the well-educated parts of startup culture.

Eve is a functional-relational language. Every input to an Eve program is stored in one of a few insert-only tables. The program itself consists of a series of views written in a relational query language. Some of these views represent internal state. Others represent IO that needs to be performed. Either way there is no hidden or forgotten state - the contents of these views can always be calculated from the input tables.

Eve is designed for live programming. As the user makes changes, the compiler is constantly re-compiling code and incrementally updating the views. The compiler is designed to be resilient and will compile and run as much of the code as possible in the face of errors. The structural editor restricts partially edited code to small sections, rather than rendering entire files unparseable. The pointer-free relational data model and the timeless views make it feasible to incrementally compute the state of the program, rather than starting from scratch on each edit.
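
Eve's actual syntax is not public, but the data model described above is easy to approximate in miniature: inputs accumulate in insert-only logs, and every other piece of state is a pure view computed from them. A rough OCaml sketch (all names hypothetical):

(* Inputs are only ever appended; nothing is updated in place. *)
type event = Click of string          (* e.g. the id of a button *)

type log = event list                 (* the insert-only input table *)

let record (e : event) (l : log) : log = e :: l

(* A "view" is a pure function of the log: here, a counter defined as
   the number of clicks on a given button. *)
let counter (button : string) (l : log) : int =
  List.length (List.filter (function Click b -> b = button) l)

let () =
  let l = record (Click "inc") (record (Click "inc") []) in
  Printf.printf "counter = %d\n" (counter "inc" l)   (* prints 2 *)

Because views are derived rather than mutated, the program state can always be recomputed from the inputs; maintaining such views incrementally instead of from scratch is exactly what the compiler work described above targets.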

The target audience for the language is described as "non-programmers", but in fact it looks like their test group had some previous experience of Excel. (I would guess that experimenting with children with no experience of programming at all, not even Excel, could have produced very different results.)

Posts so far, by Jamie Brandon:

Some random quotes:

Retrospective:

Excited, we presented our prototype to a small number of non-programmers and sat back to watch the magic. To our horror, not a single one of them could figure out what the simple example program did or how it worked, nor could they produce any useful programs themselves. The sticking points were lexical scope and data structures. Every single person we talked to just wanted to put data in an Excel-like grid and drag direct references. Abstraction via symbol binding was not an intuitive or well-liked idea.

[...]

Our main data-structure was now a tree of tables. Rather than one big top-level function, we switched to a pipeline of functions. Each function pulled data out of the global store using a datalog query, ran some computation and wrote data back. Having less nesting reduced the impact of lexical scope and cursor passing. Using datalog allowed normalising the data store, avoiding all the issues that came from hierarchical models.

At this point we realised we weren't building a functional language anymore. Most of the programs were just datalog queries on normalised tables with a little scalar computation in the middle. We were familiar with Bloom and realised that it fit our needs much better than the functional pidgin we had built so far - no lexical scoping, no data-structures, no explicit ordering. In late March we began work on a Bloom interpreter.

October:

Where most languages express state as a series of changes ('when I click this button add 1 to the counter'), Eve is built around views over input logs ('the value of the counter is the number of button clicks in the log'). Thinking in terms of views makes the current language simple and powerful. It removes the need for explicit control flow, since views can be calculated in any order that is consistent with the dependency graph, and allows arbitrary composition of data without requiring the cooperation of the component that owns that data.

Whenever we have tried to introduce explicit change we immediately run into problems with ordering and composing those changes and we lose the ability to directly explain the state of the program without reference to data that no longer exists.

[...]

In a traditional imperative language, [context] is provided by access to dynamic scoping (or global variables - the poor man's dynamic scope) or by function parameters. In purely functional languages it can only be provided by function parameters, which is a problem when a deeply buried function wants to access some high up data and it has to be manually threaded through the entire callstack.

December:

Eve processes can now spawn subprocesses and inject code into them. Together with the new communication API this allowed much of the IDE architecture to be lifted into Eve. When running in the browser only the UI manager lives on the main thread - the editor, the compiler and the user's program all live in separate web-workers. The editor uses the process API to spawn both the compiler and the user's program and then subscribes to the views it needs for the debugging interface. Both the editor and the user's program send graphics data to the UI manager and receive UI events in return.

FLOPS 2016, promoting cross-fertilization across the whole declarative programming and theory and practice

LtU is generally not an appropriate venue for posting calls for papers, but there have been exceptions when the CFP has exceptionally wide appeal. Hopefully FLOPS 2016 qualifies.
http://www.info.kochi-tech.ac.jp/FLOPS2016/

FLOPS has been established to promote cooperation between logic and functional programmers, hence the name. This year we have taken the name exceptionally seriously, to cover the whole extent of declarative programming, which also includes program transformation, re-writing, and extracting programs from proofs of their correctness. There is another strong emphasis: on cross-fertilization among people developing theory, writing tools and language systems using that theory, and the users of these tools. We specifically ask the authors to make their papers understandable by the wide audience of declarative programmers and researchers.

As you can see from the Program Committee list, the members have done first-rate theoretical work, and are also known for their languages, tools and libraries. The PC will appreciate good practical work. Incidentally, there is a special category, ``System Descriptions'', that FLOPS has always been known for. We really want more submissions in that category.

One can see even on LtU that there is some rift between theoreticians and practitioners: Sean McDirmid's messages come to mind. He does have many good points. We really hope that FLOPS will help repair this rift.

Pycket: A Tracing JIT For a Functional Language

Pycket: A Tracing JIT For a Functional Language
Spenser Bauman, Carl Friedrich Bolz, Robert Hirschfeld, Vasily Kirilichev, Tobias Pape, Jeremy Siek, and Sam Tobin-Hochstadt
2015

We present Pycket, a high-performance tracing JIT compiler for Racket. Pycket supports a wide variety of the sophisticated features in Racket such as contracts, continuations, classes, structures, dynamic binding, and more. On average, over a standard suite of benchmarks, Pycket outperforms existing compilers, both Racket’s JIT and other highly-optimizing Scheme compilers. Further, Pycket provides much better performance for proxies than existing systems, dramatically reducing the overhead of contracts and gradual typing. We validate this claim with performance evaluation on multiple existing benchmark suites.

The Pycket implementation is of independent interest as an application of the RPython meta-tracing framework (originally created for PyPy), which automatically generates tracing JIT compilers from interpreters. Prior work on meta-tracing focuses on bytecode interpreters, whereas Pycket is a high-level interpreter based on the CEK abstract machine and operates directly on abstract syntax trees. Pycket supports proper tail calls and first-class continuations. In the setting of a functional language, where recursion and higher-order functions are more prevalent than explicit loops, the most significant performance challenge for a tracing JIT is identifying which control flows constitute a loop -- we discuss two strategies for identifying loops and measure their impact.
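
For readers unfamiliar with the CEK formulation, here is a minimal OCaml sketch of a CEK machine for the call-by-value lambda calculus -- an illustration of the interpreter style only, not Pycket's implementation, which is written in RPython and covers full Racket:

(* States are triples (Control, Environment, Kontinuation); the machine
   steps until the control is a value and the continuation is Done. *)
type exp =
  | Var of string
  | Lam of string * exp
  | App of exp * exp

type value = Clos of string * exp * env
and env = (string * value) list

type kont =
  | Done
  | Arg of exp * env * kont          (* evaluate the argument next *)
  | Fun of value * kont              (* then apply the saved function *)

let step (c, e, k) =
  match c, k with
  | Var x, _ ->
      (match List.assoc_opt x e with
       | Some (Clos (y, b, e')) -> (Lam (y, b), e', k)
       | None -> failwith ("unbound variable " ^ x))
  | App (f, a), _ -> (f, e, Arg (a, e, k))
  | Lam (x, b), Arg (a, e', k') -> (a, e', Fun (Clos (x, b, e), k'))
  | Lam (x, b), Fun (Clos (y, fbody, fenv), k') ->
      (fbody, (y, Clos (x, b, e)) :: fenv, k')
  | Lam _, Done -> (c, e, k)         (* final state *)

let rec run state =
  match state with
  | (Lam _, _, Done) -> state
  | _ -> run (step state)

let eval e = run (e, [], Done)

let _ = eval (App (Lam ("x", Var "x"), Lam ("y", Var "y")))
(* terminates with the closure for (fun y -> y) *)

In such an interpreter there is no bytecode loop for the tracer to latch onto, which is why loop detection (discussed in the paper) becomes the central problem.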

The Unison Programming Platform

Unison - a next-generation programming platform, by Paul Chiusano:

  • Programs are edited in a (browser-based) semantic editor which guarantees programs are well-formed and typecheck by construction
  • The codebase is a purely functional data structure
  • The program is a UI, and UI interaction is programming
  • Persistent data sources must be accessible via a high-level, typed API

An interesting project mentioned in a comment a few weeks ago; it now has its own website and a more descriptive overview explaining its core premises.

Previous posts on Paul's blog are also of interest, and some feature videos demonstrating some aspects of Unison.

BER MetaOCaml -- an OCaml dialect for multi-stage programming

BER MetaOCaml -- an OCaml dialect for multi-stage programming
Oleg Kiselyov
2010-2015-

BER MetaOCaml is a conservative extension of OCaml for ``writing programs that generate programs''. BER MetaOCaml adds to OCaml the type of code values (denoting ``program code'', or future-stage computations), and two basic constructs to build them: quoting and splicing. The generated code can be printed, stored in a file -- or compiled and linked back to the running program, thus implementing run-time code optimization. A well-typed BER MetaOCaml program generates only well-scoped and well-typed programs: The generated code shall compile without type errors. The generated code may run in the future but it is type checked now. BER MetaOCaml is a complete re-implementation of the original MetaOCaml by Walid Taha, Cristiano Calcagno and collaborators.

Introduction to staging and MetaOCaml

The standard example of meta-programming -- the running example of A. P. Ershov's 1977 paper that begat partial evaluation -- is the power function, computing x^n. In OCaml:

let square x = x * x

let rec power n x =
  if n = 0 then 1
  else if n mod 2 = 0 then square (power (n/2) x)
  else x * (power (n-1) x)

[...]

In MetaOCaml, we may also specialize the power function to a particular value n, obtaining the code which will later receive x and compute x^n. We re-write power n x annotating expressions as computed `now' (when n is known) or `later' (when x is given).

let rec spower n x =
  if n = 0 then .<1>.
  else if n mod 2 = 0 then .<square .~(spower (n/2) x)>.
  else .<.~x * .~(spower (n-1) x)>.;;
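
To finish the example (a sketch in the style of the MetaOCaml tutorial; spowern and spower7_code are illustrative names, not from the excerpt above), we close over the future-stage argument to obtain a code value specialized to a particular exponent, and run it with BER MetaOCaml's Runcode.run:

(* Close over the future-stage argument x: spowern n is a code value
   of type (int -> int) code, specialized to the given n. *)
let spowern n = .<fun x -> .~(spower n .<x>.)>.;;

let spower7_code = spowern 7;;
(* Printing spower7_code shows the generated, unrolled code, roughly:
   .<fun x_1 -> x_1 * (square (x_1 * (square (x_1 * 1))))>. *)

(* Runcode.run byte-compiles the code value and links it back into the
   running program, yielding an ordinary OCaml function. *)
let power7 = Runcode.run spower7_code;;
let _ = power7 2;;   (* - : int = 128 *)
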
A brief history of (BER) MetaOCaml

As MetaOCaml was being developed, new versions of the mainline OCaml were released with sometimes many fixes and improvements. The MetaOCaml team tracked new OCaml releases and merged the changes into MetaOCaml. (The MetaOCaml version number has as its base OCaml's release version.) The merge was not painless. For example, any new function in the OCaml compiler that dealt with Parsetree (AST) or Typedtree had to be modified to handle MetaOCaml extensions to these data structures. The merge process became more and more painful as the two languages diverged. For instance, the native code compilation that first appeared in MetaOCaml 3.07 relied on SCaml, a large set of patches to OCaml by malc at pulsesoft.com to support dynamic linking. OCaml 3.08 brought many changes that were incompatible with SCaml. Therefore, in MetaOCaml 3.08 the native compilation mode was broken. The mode was brought back in the summer of 2005, by re-engineering the SCaml patch and implementing the needed parts of dynamic linking without any modification to the OCaml code. The revived native compilation has survived through the end.

[...]

BER MetaOCaml has been re-structured to minimize the amount of changes to the OCaml type-checker and to separate the `kernel' from the `user-level'. The kernel is a set of patches and additions to OCaml, responsible for producing and type-checking code values. The processing of built code values -- so-called `running' -- is user-level. Currently the user-level metalib supports printing, type-checking, and byte-compiling and linking of code values. Users have added other ways of running the code, for example, compiling it to machine code, C or LLVM -- without any need to hack into (Meta)OCaml or even recompile it.

[...]

By relying on attributes, a feature introduced in OCaml 4.02, BER N102 has become much more closely integrated with OCaml. It is instructive to compare the amount of changes BER MetaOCaml makes to the OCaml distribution. The previous version (BER N101) modified 32 OCaml files. The new BER N102 modifies only 7 (that number could be further reduced to only 2; the only file with nontrivial modifications is typing/typecore.ml). It is now a distinct possibility that -- with small hooks that may be provided in future OCaml versions -- MetaOCaml becomes just a regular library or a plug-in, rather than a fork.