Function Interface Models for Hardware Compilation

Function Interface Models for Hardware Compilation, by Dan Ghica.

The problem of synthesis of gate-level descriptions of digital circuits from behavioural specifications written in higher-level programming languages (hardware compilation) has been studied for a long time, yet a definitive solution has not been forthcoming. The argument of this essay is mainly methodological, bringing a perspective that is informed by recent developments in programming-language theory. We argue that one of the major obstacles in the way of hardware compilation becoming a useful and mature technology is the lack of a well-defined function interface model (FIM), i.e. a canonical way in which functions communicate with arguments. We discuss the consequences of this problem and propose a solution based on new developments in programming-language theory. We conclude by presenting a prototype implementation and some examples illustrating our principles.

Ghica's work has been mentioned before, but not this particular article, which is a follow-up to that work. The paper touches on a number of common LtU topics: language design for hardware description, game semantics, Reynolds's SCI... I don't know much about hardware design, so I'd be interested to hear what people think.


Message to LtU editors

Please stop posting fascinating, must-read articles to the front page. I am too busy.

Seriously, I am very pleased to see this kind of methodological work, and am looking forward to reading this, together with papers [10] and [12] from the references section.

Hey, I wasted hours reading

Hey, I wasted hours reading about linear types yesterday 'cause of you...

By the way, I just looked at the homepage from top to bottom. Wow, what an amazing collection of ultra-interesting stuff. Go Lambda, go!

Interesting idea but...

Hi LtUers. I'm a long-time lurker here. The hardware-related articles are of particular interest to me since that's what I do for a living. So without further ado, some thoughts:

For starters, the author's statement about the economics of FPGAs is incorrect for everything but very low-volume (read: one-off) designs. For moderate- to high-volume designs, moving down to a smaller part can reduce parts cost substantially, especially since FPGAs are now SoCs in their own right and often constitute a large share of the parts cost, so efficiency is very important.

The FIM concept is interesting, but whereas in software a single FIM is sufficient because the machine is fixed, hardware seems to require different FIMs for different situations. The req/ack model is not appropriate for all functions, or even for most functions: many data-processing pipelines, for example, just 'push' data through without any information flowing backwards through the pipeline.
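To make the contrast concrete, here is a toy cycle-level sketch in Python (my own illustration, not anything from the paper): the push-only stage has no way to refuse data, while the handshaked stage stalls the producer until it acknowledges, and the stall cycles show up in the total cycle count.

```python
from collections import deque

def push_stage(f, stream):
    # Push-only stage: transforms every sample; there is no backward
    # signal, so the stage can never ask the producer to wait.
    return [f(x) for x in stream]

def req_ack_stage(f, stream, ready):
    # Handshaked stage: the producer holds its request until the stage
    # asserts ack ('ready' this cycle); the cycle count reflects stalls.
    out, pending, cycle = [], deque(stream), 0
    while pending:
        if ready[cycle % len(ready)]:       # ack fires this cycle
            out.append(f(pending.popleft()))
        cycle += 1                          # otherwise the producer stalls
    return out, cycle
```

With a stage that is ready only every other cycle, three items take five cycles through the req/ack link, whereas the push stage has no notion of stalling at all.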

Finally, the whole idea of taking a software-ish algorithm and trying to compile it down to hardware seems flawed. When you design the practical implementation of an algorithm you have to take into consideration the underlying machine and its strengths and weaknesses. Even in fairly abstract languages like Haskell you have to consider realities such as caching and IO buffering.

The way you implement a median filter in an imperative language is entirely different from the way you implement one in hardware, in large part because of the assumptions you make about the costs of different operations. For example, in software, compare and shift have roughly the same cost. In hardware, especially in an FPGA architecture like Xilinx's where you have things like SRL macros, they're very different.
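As a concrete (hypothetical) illustration of that point, here are two 3-tap medians in Python: the software version just sorts and picks the middle, while the hardware-friendly version is a fixed compare-exchange network that maps directly onto comparators and wires. They compute the same result, but each is idiomatic only under its own cost model.

```python
def median3_software(a, b, c):
    # Software habit: sort and take the middle; data movement costs
    # about the same as comparison, so this is the natural formulation.
    return sorted([a, b, c])[1]

def median3_network(a, b, c):
    # Hardware habit: a fixed 3-input sorting network, three
    # compare-exchange steps, each of which maps onto one comparator.
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return b        # middle value after the network settles
```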

I'm still looking for a better approach. VHDL & Verilog are to hardware what C is to software, but approaching hardware design from a software point of view is also doomed to failure, IMO.

I think the functional programming paradigm has a number of things going for it w.r.t. HW design. Hardware has a natural separation between pure functions and state (flip-flops, FIFOs, RAM), much like FP. Strong typing & verification of pure functions are also a big plus.
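One way to see that separation (a sketch of my own, in the style of Lava-like FP HDLs, not something from the paper): model a synchronous block as a pair of pure functions folded over an explicit state register.

```python
def moore_machine(step, output, state, inputs):
    # 'step' and 'output' are pure combinational functions; the only
    # stateful thing is the explicit register ('state') threaded through,
    # mirroring the flip-flop / combinational-logic split in hardware.
    outs = []
    for x in inputs:
        outs.append(output(state))      # output depends on state only
        state = step(state, x)          # register updates at the clock edge
    return outs, state
```

For example, an accumulator is just `step = lambda s, x: s + x` with `output = lambda s: s`; all the "hardware" state lives in the one explicit register.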

Behavioural Specification Language

Some of your issues can be covered.

The FIM they describe in the paper is not proposed as "the One True FIM." In particular, the request/acknowledgement aspect comes from the particular game semantics they chose, and that choice was mostly driven by the source language. As they mention in passing, game semantics can be applied to languages like the pi-calculus. If they had used a source language more similar to the pi-calculus, there would not have been such a request/acknowledgement pattern except where explicitly used in the source language. The Syntactic Control of Interference language could then be built on top of such a pi-calculus-like base.

The FIM they do describe is already type-indexed. Nothing precludes the use of a richer type system or, violating one of their principles, the use of information outside the types to parameterize the FIM. They explicitly link the concept of FIMs to ABIs and FFIs, and there is quite definitely not one single ABI/FFI used for software even on a single machine, let alone between machines; the machine is certainly not fixed for software either.

Finally, it is not their goal to be able to take arbitrary existing programs and compile them to hardware in any sane way, although their example was essentially that, since it represents an extreme case. Their goal is a behavioural specification language, i.e. a language that is geared toward hardware specification but lacks any explicit means for specifying non-behavioural hardware details such as wiring and layout. There will of course be abstraction leakage, as there is in any programming language.

Whether or not producing competitive hardware designs from a purely behavioural specification is feasible, I'm fairly confident that the relative costs of operations will not be a particularly large hurdle. Programmers can adjust, and have adjusted, to such changes, which are often dramatic, across different eras, machine architectures, application domains, and even programming languages.