
A Tiny Computer

A Tiny Computer. An unpublished memo by Chuck Thacker, Microsoft Research, 3 September 2007. Posted with permission.

Alan Kay recently posed the following problem:

"I'd like to show JHS and HS kids 'the simplest non-tricky architecture' in which simple gates and flipflops manifest a programmable computer”.

Alan posed a couple of other desiderata, primarily that the computer should demonstrate fundamental principles while still being capable of running real programs produced by a compiler. This introduces some tension into the design, since simplicity and performance are sometimes in conflict.

This sounded like an interesting challenge, and I have a proposed design.

The memo presents the design of a complete CPU in under two pages of FPGA-ready Verilog. The TC3 is a Harvard-architecture 32-bit RISC with 1KB of instruction memory, 1KB of data memory, and 128 general-purpose registers. This design is an ancestor of the DDR2 DRAM Controller used in the BEE3.
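As a rough illustration of that organization, here is a minimal Haskell sketch of the machine state the description implies; the record fields mirror only the parameters quoted above, and the TC3's actual instruction encoding and semantics live in Thacker's Verilog, not here.

```haskell
import Data.Array (Array, (!))
import Data.Word  (Word32)

-- Machine state implied by the description: separate instruction and data
-- memories (Harvard organization) plus 128 general-purpose registers.
data Machine = Machine
  { imem :: Array Int Word32   -- instruction memory
  , dmem :: Array Int Word32   -- data memory, addressed independently of imem
  , regs :: Array Int Word32   -- 128 general-purpose registers
  , pc   :: Int                -- program counter into imem
  }

-- Fetch is an indexed read of instruction memory; decode/execute would
-- interpret the fetched 32-bit word according to the TC3 encoding.
fetch :: Machine -> Word32
fetch m = imem m ! pc m
```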

To help us into the brave new world of Hardware/Software co-design!

Advantages of pointfree?

In the famous paper "Can Programming Be Liberated from the von Neumann Style?", Backus describes the programming language FP. It has many properties that are clearly advantageous:

  • Programs are written in terms of a well-understood set of combining forms that preserve termination properties
  • Data structures are persistent and IO is well-structured
  • Functions are not first-class objects, which makes certain forms of reasoning easier
  • Programs are written primarily in terms of function composition, not application, which makes manipulation of programs easier (or at least, it does in my estimation)

There is one more prominent property: FP is a pointfree language. My question is, how advantageous is this really? Obviously it does make manipulation of programs easier in some sense, but its benefits seem to pale in comparison to the benefits derived from other properties (such as its first-order nature).

There is also a big limitation that results from not having any sort of "substitution form": It is not possible to "lift" objects into functions. For example, if one wants to write a function that takes some value X and a list and multiplies every element in that list by X, there's no way to "lift" that X into the 'map' combining form. One must instead use 'distr' to transform the list into a list of pairs where one element of each pair is 'X', after which the multiplication function can be mapped over the list. Besides being a bit opaque, this is needlessly inefficient.
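For concreteness, here is a Haskell transcription of that encoding (my sketch; the names follow the post rather than Backus's exact FP syntax):

```haskell
-- Distribute x across a list, pairing it with every element
-- (the role the post assigns to 'distr').
distr :: (a, [b]) -> [(a, b)]
distr (x, ys) = zip (repeat x) ys

-- Scale every element by the first component: distribute, then map
-- uncurried multiplication. No variable is ever "lifted" into 'map'.
scaleFP :: (Integer, [Integer]) -> [Integer]
scaleFP = map (uncurry (*)) . distr

-- scaleFP (3, [1, 2, 4])  ==  [3, 6, 12]
```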

Of course, one could always write a new combining form to handle this particular situation (if the language were to permit it), but there's no way to write it in terms of the existing 'map' form without substitution or higher order functions (which would permit partial application); it must be written from scratch, and as such, can't trivially inherit any properties of 'map'. In some sense, the lack of substitution seems to work against the idea of programming with a well-understood set of combining forms.
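By contrast, with partial application the same function really is just a specialization of 'map' (again a Haskell sketch, since FP itself forbids this):

```haskell
-- Partial application "lifts" x into the mapped function directly,
-- so the definition inherits map's behaviour for free.
scaleHO :: Integer -> [Integer] -> [Integer]
scaleHO x = map (x *)

-- scaleHO 3 [1, 2, 4]  ==  [3, 6, 12]
```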

This is hardly an attack on the idea of pointfree languages. I'm well-versed in concatenative languages (which typically do offer variables as an escape hatch), I've written non-trivial programs in FP, and I utilize the pointfree style in Haskell heavily. I do very much enjoy pointfree programming; I simply am having trouble justifying it in an objective manner.

All thoughts and pointers to relevant literature will be much appreciated. (To be clear, I'm looking for materials that directly address the benefits of doing away with variables, not the other properties of FP or SQUIGOL-like languages such as programming with a rich set of combining forms or morphisms.) More specifically, if anyone were able to comment on what one would lose if a substitution form were added to FP, you'd have my gratitude.

A New Approach to the Functional Design of a Digital Computer

A New Approach to the Functional Design of a Digital Computer by R. S. Barton, 1961.

The present methods of determining the functional design of computers are critically reviewed and a new approach proposed. This is illustrated by explaining, in abstracted form, part of the control organization of a new and different machine based, in part, on the ALGOL 60 language. The concepts of expression and procedure lead directly to the use of a Polish string program. A new arrangement of control registers results, which provides for automatic allocation of temporary storage within expressions and procedures, and a generalized subroutine linkage.

The simplicity and power of these notions suggest that there is much room for improvement in present machines and that more attention should be given to control functions in new designs.
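As an illustration of the expression part of that claim (my own sketch, not Barton's control organization): evaluating a Polish-string program with an operand stack needs no named temporaries, which is exactly the "automatic allocation of temporary storage within expressions".

```haskell
data Tok = Num Int | Add | Mul

-- Evaluate a postfix (reverse Polish) token string. The operand stack
-- silently holds every intermediate result, so no temporaries are named.
evalRPN :: [Tok] -> Int
evalRPN = top . foldl step []
  where
    step st       (Num n) = n : st
    step (b:a:st) Add     = (a + b) : st
    step (b:a:st) Mul     = (a * b) : st
    step _        _       = error "malformed Polish string"
    top (x:_) = x
    top _     = error "empty program"

-- (1 + 2) * 4, written as the Polish string  1 2 + 4 *
-- evalRPN [Num 1, Num 2, Add, Num 4, Mul]  ==  12
```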

"One of the most amazing far reaching 4 page papers in our field" referenced in A Conversation with Alan Kay.

Lawvere Theories and Monads

Martin Hyland and John Power (2007). The Category Theoretic Understanding of Universal Algebra: Lawvere Theories and Monads. ENTCS 172:437-458.

Both monads and Lawvere theories provide characterisations of algebraic structure, with monads providing the more general characterisation. The authors provide an introduction to Lawvere theories, discuss their relationship to sets, and explain why monads became the more popular treatment.

They then tackle the application of the theory to the semantics of side effects. They argue that the generality of monads lets them characterise computational phenomena that have nothing to do with side effects, such as partiality and continuations, whereas Lawvere theories more cleanly characterise what side effects are.
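To make the contrast concrete, here is a small Haskell sketch (my illustration, not the paper's notation) of the state effect seen both ways: as a monad packaging computations, and as the operations get/put whose equations the Lawvere-theoretic view takes as primary.

```haskell
import Control.Monad.State (State, get, put, evalState)

-- Monad view: a stateful computation built against the State monad's interface.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n

-- Lawvere-theory view: the effect is presented by the operations get/put
-- together with equations such as
--   put s >> get     ==  put s >> pure s
--   get >>= put      ==  pure ()
--   put s >> put s'  ==  put s'
-- rather than by the type constructor 'State' itself.

main :: IO ()
main = print (evalState tick 0)   -- prints 0; the final state is 1
```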

This paper is a good introduction to an important line of recent research by Hyland and Power; cf. also the LtU story Combining computational effects.