Haskell for Mac

Available here, with Hacker News and Reddit discussions ongoing.

Even though I'm not a big fan of Haskell, I'm pretty excited about this. It represents a trend where PL is finally taking the holistic programmer experience seriously, and a move toward interactivity in program development that takes advantage of our (a) rich type systems, and (b) increasing budget of computer cycles. Even the fact that they are trying to sell this is good: if people get used to paying for tooling, that will encourage even more tooling via a healthy market feedback loop. The only drawback is the Mac App Store sandbox; app stores need to learn how to accept developer tools without crippling them.

Reagents: Expressing and Composing Fine-grained Concurrency

Reagents: Expressing and Composing Fine-grained Concurrency, by Aaron Turon:

Efficient communication and synchronization is crucial for fine-grained parallelism. Libraries providing such features, while indispensable, are difficult to write, and often cannot be tailored or composed to meet the needs of specific users. We introduce reagents, a set of combinators for concisely expressing concurrency algorithms. Reagents scale as well as their hand-coded counterparts, while providing the composability existing libraries lack.

This is a pretty neat approach to writing concurrent code, which lies somewhere between manually implementing low-level concurrent algorithms and STM. Concurrent algorithms are expressed and composed semi-naively, and reagents automate the retries for you in case of thread interference (transient failure of a CAS update), or block waiting for input from another thread (permanent failure, where no input is available).

The core seems to be k-CAS with synchronous communication between threads to coordinate reactions on shared state. The properties seem rather nice, as Aaron describes:

When used in isolation, reagents are guaranteed to perform only the CASes that the hand-written algorithm would, so they introduce no overhead on shared-memory operations; by recoding an algorithm to use reagents, you lose nothing. Yet unlike hand-written algorithms, reagents can be composed using choice, tailored with new blocking behavior, or combined into larger atomic blocks.

The benchmarks in section 6 look promising. This appears to be work towards Aaron's thesis, which provides many more details.
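
For a feel of the programming model, here is a toy Python sketch of the retry discipline and the choice combinator. It is only an illustration: the actual library is in Scala, a lock stands in for hardware CAS, and a real implementation distinguishes transient failure (retry) from permanent failure (block) instead of spinning.

import threading

class Ref:
    """A shared cell with compare-and-set (emulated with a lock here)."""
    def __init__(self, value):
        self.value = value
        self._lock = threading.Lock()

    def cas(self, expected, new):
        with self._lock:
            if self.value == expected:
                self.value = new
                return True
            return False

class Reagent:
    """A description of an atomic step; attempt() returns (ok, result)."""
    def __init__(self, attempt):
        self.attempt = attempt

    def __add__(self, other):
        """Choice: try self first, fall back to other."""
        def attempt():
            ok, r = self.attempt()
            return (ok, r) if ok else other.attempt()
        return Reagent(attempt)

    def run(self):
        # Retry until an attempt succeeds. (A real implementation would
        # block on permanent failure rather than spin.)
        while True:
            ok, r = self.attempt()
            if ok:
                return r

def upd(ref, f):
    """Reagent that atomically replaces ref's contents v with f(v).
    f returns (new_value, result), or None if the update doesn't apply."""
    def attempt():
        old = ref.value
        step = f(old)
        if step is None:
            return False, None      # permanent failure: rule doesn't apply
        new, result = step
        if ref.cas(old, new):
            return True, result
        return False, None          # transient failure: lost a race, retry
    return Reagent(attempt)

# A Treiber-style stack, with pop composed against a fallback via choice:
stack = Ref(())                     # stack as nested pairs, () is empty
push = lambda x: upd(stack, lambda s: ((x, s), None))
pop = upd(stack, lambda s: None if s == () else (s[1], s[0]))
pop_or_default = pop + Reagent(lambda: (True, "empty"))

push(1).run(); push(2).run()
print(pop_or_default.run())             # 2
print(pop.run(), pop_or_default.run())  # 1 empty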

OcaPic: Programming PIC microcontrollers in OCaml

Most embedded systems development is done in C. It's rare to see a functional programming language target any kind of microcontroller, let alone an 8-bit microcontroller with only a few kB of RAM. But the team behind the OcaPic project has somehow managed to get OCaml running on a PIC18 microcontroller. To do so, they created an efficient OCaml virtual machine in PIC assembler (~4kB of program memory), and utilized some clever techniques to postprocess the compiled bytecode to reduce heap usage, eliminate unused closures, reduce indirections, and compress the bytecode representation. Even if you're not interested in embedded systems, you may find some interesting ideas there for reducing overheads or dealing with constrained resource budgets.

Eric Lippert's Sharp Regrets

In an article for InformIT, Eric Lippert runs down his "bottom 10" C# language design decisions:

When I was on the C# design team, several times a year we would have "meet the team" events at conferences, where we would take questions from C# enthusiasts. Probably the most common question we consistently got was "Are there any language design decisions that you now regret?" and my answer is "Good heavens, yes!"

This article presents my "bottom 10" list of features in C# that I wish had been designed differently, with the lessons we can learn about language design from each decision.

The "lessons learned in retrospect" for each one are nicely done.

STABILIZER: Statistically Sound Performance Evaluation

My colleague Mike Rainey described this paper as one of the nicest he's read in a while.

STABILIZER: Statistically Sound Performance Evaluation
Charlie Curtsinger, Emery D. Berger
2013

Researchers and software developers require effective performance evaluation. Researchers must evaluate optimizations or measure overhead. Software developers use automatic performance regression tests to discover when changes improve or degrade performance. The standard methodology is to compare execution times before and after applying changes.

Unfortunately, modern architectural features make this approach unsound. Statistically sound evaluation requires multiple samples to test whether one can or cannot (with high confidence) reject the null hypothesis that results are the same before and after. However, caches and branch predictors make performance dependent on machine-specific parameters and the exact layout of code, stack frames, and heap objects. A single binary constitutes just one sample from the space of program layouts, regardless of the number of runs. Since compiler optimizations and code changes also alter layout, it is currently impossible to distinguish the impact of an optimization from that of its layout effects.

This paper presents STABILIZER, a system that enables the use of the powerful statistical techniques required for sound performance evaluation on modern architectures. STABILIZER forces executions to sample the space of memory configurations by repeatedly re-randomizing layouts of code, stack, and heap objects at runtime. STABILIZER thus makes it possible to control for layout effects. Re-randomization also ensures that layout effects follow a Gaussian distribution, enabling the use of statistical tests like ANOVA. We demonstrate STABILIZER's efficiency (< 7% median overhead) and its effectiveness by evaluating the impact of LLVM’s optimizations on the SPEC CPU2006 benchmark suite. We find that, while -O2 has a significant impact relative to -O1, the performance impact of -O3 over -O2 optimizations is indistinguishable from random noise.

One take-away of the paper is the following validation technique: they verify, empirically, that their randomization results in a Gaussian distribution of execution times. This does not guarantee that they found all the sources of measurement noise, but it does guarantee that the sources of noise they handled are properly randomized, and that their effects can be reasoned about rigorously using the usual tools of statisticians. Having a Gaussian distribution gives you much more than just "hey, taking the average over these runs makes you resilient to {weird hardware effect blah}"; it lets you compute p-values and in general use statistics.
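
To make that concrete, here is a minimal sketch, with made-up numbers rather than the paper's data, of the kind of test this enables once per-run layout re-randomization yields independent, roughly Gaussian samples:

import numpy as np
from scipy import stats

# Hypothetical timing samples: under layout re-randomization, each run
# is an independent draw from an (approximately) Gaussian distribution.
rng = np.random.default_rng(0)
times_o2 = rng.normal(loc=10.00, scale=0.3, size=30)  # seconds at -O2
times_o3 = rng.normal(loc=9.95, scale=0.3, size=30)   # seconds at -O3

# Two-sample t-test of the null hypothesis "same mean runtime".
t, p = stats.ttest_ind(times_o2, times_o3)
print(f"t = {t:.2f}, p = {p:.3f}")
# A large p-value (say > 0.05) means the -O3 vs -O2 difference is
# indistinguishable from noise, echoing the paper's SPEC CPU2006 result.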

State of the Haskell ecosystem - August 2015

Interesting survey.

Based on a brief look, I am not sure I agree with all the conclusions/rankings. But most seem to make sense, and the Notable Libraries and examples in each category are helpful.

Using Commutative Assessments to Compare Conceptual Understanding in Blocks-based and Text-based Programs

Using Commutative Assessments to Compare Conceptual Understanding in Blocks-based and Text-based Programs, David Weintrop, Uri Wilensky. Proceedings of the eleventh annual International Conference on International Computing Education Research. Via Computing Education Blog.

Blocks-based programming environments are becoming increasingly common in introductory programming courses, but to date, little comparative work has been done to understand if and how this approach affects students' emerging understanding of fundamental programming concepts. In an effort to understand how tools like Scratch and Blockly differ from more conventional text-based introductory programming languages with respect to conceptual understanding, we developed a set of "commutative" assessments. Each multiple-choice question on the assessment includes a short program that can be displayed in either a blocks-based or text-based form. The set of potential answers for each question includes the correct answer along with choices informed by prior research on novice programming misconceptions. In this paper we introduce the Commutative Assessment, discuss the theoretical and practical motivations for the assessment, and present findings from a study that used the assessment. The study had 90 high school students take the assessment at three points over the course of the first ten weeks of an introduction to programming course, alternating the modality (blocks vs. text) for each question over the course of the three administrations of the assessment. Our analysis reveals differences in performance between blocks-based and text-based questions as well as differences in the frequency of misconceptions based on the modality. Future work, potential implications, and limitations of these findings are also discussed.

ACM Classic Books Series

This list of classic books is the result of a poll ACM conducted where members named their favorite computer science books.

Good list. Bar the last two, which I have nothing against, the list consists of favorites of mine. It is always nice to see how many classics of CS come from work on programming languages. Not a surprise for anyone here, of course, but not always acknowledged. While we are on the subject of classic books, check out Luke's twitter poll here.

Ceptre: A Language for Modeling Generative Interactive Systems.

Ceptre: A Language for Modeling Generative Interactive Systems.
Chris Martens
2015

We present a rule specification language called Ceptre, intended to enable rapid prototyping for experimental game mechanics, especially in domains that depend on procedural generation and multi-agent simulation.

Ceptre can be viewed as an explication of a new methodology for understanding games based on linear logic, a formal logic concerned with resource usage. We present a correspondence between gameplay and proof search in linear logic, building on prior work on generating narratives. In Ceptre, we introduce the ability to add interactivity selectively into a generative model, enabling inspection of intermediate states for debugging and exploration as well as a means of play.

We claim that this methodology can support game designers and researchers in designing, analyzing, and debugging the core systems of their work in generative, multi-agent gameplay. To support this claim, we provide two case studies implemented in Ceptre, one from interactive narrative and one from a strategy-like domain.

Some choice quotes from the article follow.

Simple examples of the rule language:

The meaning of A -o B, to a first approximation, is that whenever the predicates in A are present, they may be replaced with B. One example of a rule is:

do/compliment:
     at C L * at C’ L * likes C C’
  -o at C L * at C’ L * likes C C’ * likes C’ C.

[...]

Note that because of the replacement semantics of the rule, we need to reiterate everything on the right-hand side of the -o that we don’t want to disappear, such as the character locations and original likes fact. We use the syntactic sugar of prepending $ to anything intended not to be removed in order to reduce this redundancy:

do/compliment: $at C L * $at C’ L * $likes C C’ -o likes C’ C.

A more complex rule describes a murder action, using the ! operator to indicate a permanent state:

do/murder:
    anger C C’ * anger C C’ * anger C C’ * anger C C’
    * $at C L * at C’ L * $has C weapon
  -o !dead C’.

(This rule consumes C’s location, maintaining a global invariant that each character is mutually exclusively at a location or !dead.) Here we see a departure from planning formalisms: four instances of anger C C’ mean something different from one. Here we are using an emotion not just as a precondition but as a resource, where if we have enough of it, we can exchange it for a drastic consequence. Whether or not we defuse the anger, or choose to keep it by prepending $ to the predicates, is an authorial choice.
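
The linear-logic reading is essentially multiset rewriting: facts are resources with multiplicities, and a rule consumes its left-hand side and produces its right-hand side. Here is a tiny Python sketch of that semantics (propositional facts only, ignoring Ceptre's variables and proof terms; the fact strings are stand-ins, not Ceptre syntax):

from collections import Counter

def applicable(lhs, state):
    """A rule can fire only if the state holds every LHS fact,
    with at least the required multiplicity."""
    return all(state[fact] >= n for fact, n in lhs.items())

def fire(lhs, rhs, state):
    """Linear consumption: subtract the LHS multiset, add the RHS."""
    assert applicable(lhs, state)
    return state - lhs + rhs

# do/murder demands FOUR anger facts; the $-prefixed facts of the rule
# reappear on the right-hand side so they survive the rewrite.
murder_lhs = Counter({"anger C C'": 4, "at C L": 1, "at C' L": 1,
                      "has C weapon": 1})
murder_rhs = Counter({"at C L": 1, "has C weapon": 1, "dead C'": 1})

state = Counter({"anger C C'": 3, "at C L": 1, "at C' L": 1,
                 "has C weapon": 1})
print(applicable(murder_lhs, state))        # False: three angers, not four
state["anger C C'"] += 1                    # one more slight...
print(fire(murder_lhs, murder_rhs, state))  # anger spent, C' is dead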

Concurrency in narration:

Two rule applications that consume disjoint sets of resources from the same state can be said to happen concurrently, or independently. On the other hand, a rule that produces resources and another that consumes a subset of them can be said to be in a causal, or dependent, relationship. Less abstractly, if resources represent facts associated with particular game entities or characters, then independent rule applications represent potentially concurrent action by multiple such entities, and causally related rule applications represent either sequential action by a single actor, or synchronized interaction between two entities.

Stages, and a larger example:

We would like for some of these rules to run automatically without player intervention. In our next iteration of the program, we will make use of a Ceptre feature called stages. Stages are a way of structuring a program in terms of independent components. Syntactically, a stage is a curly-brace-delimited set of rules with an associated name. Semantically, a stage is a unit of computation that runs to quiescence, i.e. no more rules are able to fire, at which point control may be transferred to another stage.

[...]

Additionally, we can test the design by “scripting” certain player strategies. For instance, we could augment the two rules in the fight stage to be deterministic, fighting when the monster can’t kill us in one turn and fleeing otherwise:

stage fight = {
  do_fight:
      choice * $fight_in_progress * $monster Size * $health HP * Size < HP
    -o try_fight.
  do_flee:
      choice * fight_in_progress * $monster Size * $health HP * Size >= HP
    -o flee_screen.
}

If we remove interactivity from this stage, then we get automated combat sequences that should never result in the player’s death.

Running Probabilistic Programs Backwards

I saw this work presented at ESOP 2015 by Neil Toronto, and the talk was excellent (slides).

Running Probabilistic Programs Backwards
Neil Toronto, Jay McCarthy, David Van Horn
2015

Many probabilistic programming languages allow programs to be run under constraints in order to carry out Bayesian inference. Running programs under constraints could enable other uses such as rare event simulation and probabilistic verification---except that all such probabilistic languages are necessarily limited because they are defined or implemented in terms of an impoverished theory of probability. Measure-theoretic probability provides a more general foundation, but its generality makes finding computational content difficult.

We develop a measure-theoretic semantics for a first-order probabilistic language with recursion, which interprets programs as functions that compute preimages. Preimage functions are generally uncomputable, so we derive an abstract semantics. We implement the abstract semantics and use the implementation to carry out Bayesian inference, stochastic ray tracing (a rare event simulation), and probabilistic verification of floating-point error bounds.

(also on SciRate)

The introduction sells the practical side of the work a bit better than the abstract.

Stochastic ray tracing [30] is one such rare-event simulation task. As illustrated in Fig. 1, to carry out stochastic ray tracing, a probabilistic program simulates a light source emitting a single photon in a random direction, which is reflected or absorbed when it hits a wall. The program outputs the photon’s path, which is constrained to pass through an aperture. Millions of paths that meet the constraint are sampled, then projected onto a simulated sensor array.

The program’s main loop is a recursive function with two arguments: path, the photon’s path so far as a list of points, and dir, the photon’s current direction.

simulate-photon path dir :=
  case (find-hit (fst path) dir) of
    absorb pt −→ (pt, path)
    reflect pt norm −→ simulate-photon (pt, path) (random-half-dir norm)

Running simulate-photon (pt, ()) dir, where pt is the light source’s location and dir is a random emission direction, generates a photon path. The fst of the path (the last collision point) is constrained to be in the aperture. The remainder of the program is simple vector math that computes ray-plane intersections.

In contrast, hand-coded stochastic ray tracers, written in general-purpose languages, are much more complex and divorced from the physical processes they simulate, because they must interleave the advanced Monte Carlo algorithms that ensure the aperture constraint is met.

Unfortunately, while many probabilistic programming languages support random real numbers, none are capable of running a probabilistic program like simulate-photon under constraints to carry out stochastic ray tracing. The reason is not lack of engineering or weak algorithms, but is theoretical at its core: they are all either defined or implemented using [density functions]. [...] Programs whose outputs are deterministic functions of random values and programs with recursion generally cannot denote density functions. The program simulate-photon exhibits both characteristics.

Measure-theoretic probability is a more powerful alternative to this naive probability theory based on probability mass and density functions. It not only subsumes naive probability theory, but is capable of defining any computable probability distribution, and many uncomputable distributions. But while even the earliest work [15] on probabilistic languages is measure-theoretic, the theory’s generality has historically made finding useful computational content difficult. We show that measure-theoretic probability can be made computational by

  1. Using measure-theoretic probability to define a compositional, denotational semantics that gives a valid denotation to every program.
  2. Deriving an abstract semantics, which allows computing answers to questions about probabilistic programs to arbitrary accuracy.
  3. Implementing the abstract semantics and efficiently solving problems.

In fact, our primary implementation, Dr. Bayes, produced Fig. 1b by running a probabilistic program like simulate-photon under an aperture constraint.
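
To make "programs as preimage computations" concrete, here is a toy interval-refinement sketch in Python of the flavor of point 2 above. It illustrates preimage computation by bisection on a single real-valued function; the paper's semantics and the Dr. Bayes implementation are far more general (probabilistic, higher-dimensional, recursive):

# Over-approximate the preimage of a target output interval under f by
# bisecting the input domain and pruning boxes whose image misses it.

def preimage(f_iv, domain, target, depth=12):
    """f_iv maps an input interval to a conservative bound on f's image."""
    lo, hi = domain
    out_lo, out_hi = f_iv(lo, hi)
    t_lo, t_hi = target
    if out_hi < t_lo or out_lo > t_hi:
        return []                           # image misses target: prune
    if depth == 0 or (t_lo <= out_lo and out_hi <= t_hi):
        return [domain]                     # inside target, or budget spent
    mid = (lo + hi) / 2
    return (preimage(f_iv, (lo, mid), target, depth - 1) +
            preimage(f_iv, (mid, hi), target, depth - 1))

def square_iv(lo, hi):
    """Interval extension of f(x) = x * x."""
    if lo >= 0: return (lo * lo, hi * hi)
    if hi <= 0: return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

# Preimage of [0.9, 1.1] under squaring on [-2, 2]: two bands around -1
# and +1, recovered to arbitrary accuracy by increasing the depth.
boxes = preimage(square_iv, (-2.0, 2.0), (0.9, 1.1))
print(min(lo for lo, hi in boxes), max(hi for lo, hi in boxes))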