Hardware Design and Functional Programming: a Perfect Match

Hardware Design and Functional Programming: a Perfect Match by Mary Sheeran, Journal of Universal Computer Science, Special issue devoted to the Brazilian Symposium on Programming Languages, 2005.

This is a slightly odd paper that explains why I am still as fascinated by the combination of functional programming and hardware design as I have ever been. It includes some looking back over my own research and that of others, and contains 60 references. It explains what kinds of research I am doing now, and why, and also presents some neat new results about parallel prefix circuits. It ends by posing lots of hard questions that we need to answer if we are to be able to design and verify circuits successfully in the future.
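For readers who haven't met them: a parallel prefix circuit takes inputs x1..xn and an associative operator, and produces all the "running" results x1, x1*x2, ..., x1*...*xn in logarithmic depth instead of the linear depth of a serial chain. As a minimal sketch (one classic construction, Sklansky's recursive fan-out scheme, not the paper's new circuits):

```python
# Sketch of a parallel prefix ("scan") network, the kind of circuit the
# paper studies. Given inputs x1..xn and an associative operator `op`,
# it computes all prefixes x1, x1*x2, ..., x1*...*xn.
# This is the recursive Sklansky construction: depth O(log n), versus
# O(n) for a serial chain of operators.
from operator import add

def sklansky(op, xs):
    """Return all prefixes of xs under the associative operator op."""
    n = len(xs)
    if n == 1:
        return xs
    mid = n // 2
    left = sklansky(op, xs[:mid])
    right = sklansky(op, xs[mid:])
    # Fan the last output of the left half into every output of the right half.
    last = left[-1]
    return left + [op(last, r) for r in right]

print(sklansky(add, [1, 2, 3, 4, 5, 6, 7, 8]))
# prefix sums: [1, 3, 6, 10, 15, 21, 28, 36]
```

The interesting design space the paper explores is the trade-off between depth, fan-out, and the number of operators; Sklansky minimizes depth but has high fan-out, and other networks (Brent-Kung, Ladner-Fischer) make different trades.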


Prof. Sheeran's Wired, which describes a circuit's design along with its layout (which is, IMO, the "harder" problem), is also interesting reading.

I can't recall the name right now, but there is a Dutch startup firm that developed a CSP-based HDL, which would make an interesting comparison.

HDLs and FPGA hacking

Are chip-making startups using homebrew HDLs to design great hardware and win big? That sounds more pleasant than designing an HDL and struggling to talk other people into using it.

I've chatted with a couple of hardware/FPGA people recently and my initial impression is that basically everybody in industry is using VHDL and that the edit-compile-run cycle on FPGAs is typically hours.

If anyone has practical pointers or HOWTOs on FPGA hacking for language nerds I'd be curious to know. The best site that I've seen is http://www.fpga4fun.com/ which has a bunch of VHDL-based examples that require a vendor-specific toolchain to build.

Chip-making startups

are (in my limited experience) more concerned with getting their chips working, which is easier to do with existing tools. Writing out or generating HDL for a particular block might even be one of the easier steps in getting something onto silicon. (In fact, I'd argue that reuse has been more successful for these blocks than for software.)

Bluespec or Handshake's Haste might be better than traditional HDLs, but as you say, getting people to use them has not been entirely successful.

Regarding FPGA hacking, I wish I had an answer :) I'm sure you've already seen Lava, which might be better than writing out code in an HDL. You still need the vendor-specific toolchain, though.

Bluespec SystemVerilog for FPGA hacking

Disclaimer: I work for Bluespec. I hope this doesn't come across as spam.


As a fellow language nerd, you might be interested in Bluespec's tools. Our source language is SystemVerilog with compositional atomic transactions instead of always-block semantics -- as you can imagine, it is much easier to reason about guarded atomic actions than about always blocks. The language and libraries have lots of FP goodies: type checking and inference, Haskell-style type classes, etc. This presents a pretty steep learning curve for some HW designers, but it will probably seem natural to you.
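To make "guarded atomic actions" concrete for software folks, here is a toy Python model of the idea. This is my own sketch, not Bluespec's actual API or scheduling algorithm; the `enq`/`deq` rules and the fire-in-priority-order scheduler are purely illustrative.

```python
# Toy model of guarded atomic rules: each rule is a (guard, action) pair,
# and a scheduler fires enabled rules one whole rule at a time. Every
# state change is therefore the effect of a single complete rule, so
# invariants can be checked rule by rule rather than against the
# read/write interleavings of concurrent always blocks.

state = {"fifo": [], "count": 0}

def enq_guard(s):  return len(s["fifo"]) < 2        # space available
def enq_action(s): s["fifo"].append(s["count"]); s["count"] += 1

def deq_guard(s):  return len(s["fifo"]) > 0        # data available
def deq_action(s): s["out"] = s["fifo"].pop(0)

rules = [("enq", enq_guard, enq_action),
         ("deq", deq_guard, deq_action)]

def step(s):
    """One 'cycle': fire each enabled rule once, atomically, in order."""
    fired = []
    for name, guard, action in rules:
        if guard(s):
            action(s)
            fired.append(name)
    return fired

for cycle in range(2):
    print(cycle, step(state), state["fifo"])
```

The point of the model is that every reachable state is reached by firing whole rules in some order, so reasoning reduces to reasoning about rule interleavings -- which is much more tractable than the event/scheduling semantics of always blocks.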

The language fits broadly into the category of high-level synthesis tools. But whereas the other tools in the HLS space attempt to infer an efficient parallel hardware architecture from a C-like language with sequential semantics, sometimes with a thread-and-lock concurrency model -- a task we think is hopeless -- Bluespec's semantics are aimed at enabling designers to produce parallel designs at a high level without losing the ability to reason about system behavior. This is a very interesting area to be working in, at the intersection of PLT and HW design.

The tool generates Verilog as output, which can be used with the synthesis tools provided by the FPGA vendors or with third-party solutions like Synplify. Since it doesn't replace the FPGA synthesis and place-and-route tools, it won't reduce the edit-compile-debug cycle time. But in our experience it does reduce the number of iterations required to get the HW right.

For some good examples of FPGA designs done by experienced Bluespec users, check out the winning entries of the last two MEMOCODE design contests here and here (I don't know if the 2008 code is available for download yet, but there is a paper describing the architecture in the proceedings, or you can contact the guys at MIT directly).

Bluespec's tool suite is commercial, but licenses are provided for academic use by just filling out a form.


I've been working in IP start-ups, and I've heard of Bluespec, of course.

That's an interesting technology, although it has to compete with proprietary flows developed around DSLs (say, a language/flow targeted at VLIW processors). Given the good knowledge of language and compiler development available today, it is not too complex for a company to develop a new DSL that will do a better job of describing and generating hardware for a specific application. A lot of chip-makers design the system and buy the actual hardware from IP providers, so all they want to see is Verilog.

But I think FP is very welcome in these new DSLs.

No, not spam!

This was very interesting!

I agree. I thought that was

I agree. I thought that was obvious from all the responses.

In general, it is quite all right to report on things you are working on when they are relevant to the discussion and when technical information is presented, not marketing.


It was interesting to see my paper here and to read the comments. It feels as though quite a lot has happened in the few years since the paper appeared. For instance, Emil Axelsson's thesis about Wired was finished not long ago, and will be defended on Sept. 9, 2008. The work has come on enormously in the last year (the recent results are described in Part II). I think that Emil's elegant use of functional programming techniques will make the thesis interesting for programming languages people. I would argue that we will need to use similar ideas even when we attempt to work at a higher level of abstraction.

The next step for Emil (and our group) will be to work with Ericsson and Galois, Inc. on developing a DSL for Digital Signal Processing algorithm design. Any pointers to related work would be greatly appreciated.

Chip-making start-ups...

...are usually using VHDL and/or Verilog and/or SystemC (there are also newer languages such as SystemVerilog). Very few would develop their own hardware description language. All the FPGA/ASIC tools take HDL-like languages as input; that's the standard.

A few silicon IP start-ups, however, develop their own abstracted description of the hardware they want to design (often based on a new or customized language, so FP could play a role here). They then generate Verilog or VHDL from this high-level description.

I've read the article, and it seems to me that it has a really narrow focus (i.e., low-level hardware implementation). Nowadays, the challenge in hardware design is more at the system and verification levels.

I'm a hardware engineer, so if you have more specific questions, feel free to ask.

Functional HDLs

Disclaimer: I work for Bluespec.

There are a number of interesting functional HDLs around. In addition to Prof. Sheeran's Lava and Wired languages, Intel produced the reFLect language (LtU thread here), there is a Scheme-to-HW compiler called SHard, and I work for a company that makes a compiler to HW from SystemVerilog extended with atomic transactions and FP constructs.

Edit: I should also mention Esterel, which has process-algebra roots but which I think should be included in any discussion of FP HDLs.

MEMOCODE 2008 Design

The MIT team has uploaded their design to OpenCores, and you can find the code and design description here.

Edit: This is in reply to my own earlier reply to Luke, in which I referenced this design but couldn't provide a link to the code. I mistakenly replied at the top level instead of to that specific comment.

Other Strands of Relevant Research

Siskind, J.M., Southard, J.R., and Crouch, K.W., "Generating Custom High Performance VLSI Designs from Succinct Algorithmic Descriptions," Proceedings of the Conference on Advanced Research in VLSI, pp. 28-40, January 1982.


Thanks very much for all the fascinating info! I didn't expect such a great response :-) The Bluespec stuff looks particularly interesting.

No such discussion is complete....

.... without mention of Atom and MyHDL

I've taken FPGA designs to market with both; generally, MyHDL resulted in a shorter time to productivity and Atom produced more efficient circuits. (It goes without saying that YMMV!)

Edit: I should add that I also have nearly 10 years of experience in traditional HDLs, and wouldn't today be crazy enough to start a design directly in either Verilog/SystemVerilog or (shudder) VHDL.

Wow yet more major stuff

Wow yet more major stuff :-)

People seem to be doing a lot with hardware development at the moment. But is anyone working on speeding up the edit-recompile-run loop on FPGAs from hours to seconds? If so, who? And if not, why not?

Lots of people are working on it....

but synthesizing and routing an FPGA or ASIC is an NP-hard problem. And for typical designs, the number of features/inputs is large. It's just a nasty, nasty problem.

Think of an optimizing compiler for your favorite programming language that had to produce a solution to a given problem but was constrained by an absolute maximum code size as well as a constraint on execution time. SW tools usually get to ignore the difficulty of hard-core optimization by simply not doing it -- limiting themselves to "easy" optimizations and asserting that it's the user's problem if the resulting program is too big or too slow.

In the HW world, where you have a limited number of gates, a limited number of routing resources, and the resulting design has to meet timing and power consumption constraints--you don't have that luxury. A design which doesn't fit into the selected part or constraints is simply not acceptable.

One of the dirtier tricks of modern FPGA/ASIC tools is our friend, nondeterminism. Tools like Quartus allow the designer to change the seed of the pseudo-RNG that provides the compilation process with its source of nondeterminism; on numerous occasions I've heard my poor colleagues on the HW side mention that they had to "change the seed" to get a design to fit. When you're grasping at that sort of straw, you have my deepest sympathy. Especially when it takes an hour or two to see if changing the seed worked.
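To illustrate why "changing the seed" can rescue a run at all, here is a toy randomized placer in Python. This is my own sketch, not any vendor's algorithm: the four-cell netlist, the grid size, and the cooling schedule are all made up, but the mechanism -- a seeded stochastic search over an NP-hard placement space, where different seeds can land on results of different quality -- is the same in spirit.

```python
# Toy simulated-annealing placement: put 4 cells on a 4x4 grid so that
# total Manhattan wirelength over the netlist is small. The search is
# randomized, so the PRNG seed determines which local minimum we reach.
import math
import random

nets = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # connected cell pairs

def wirelength(pos):
    """Total Manhattan distance over all nets for a placement."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in nets)

def place(seed, cells=4, grid=4, steps=2000):
    rng = random.Random(seed)
    sites = [(x, y) for x in range(grid) for y in range(grid)]
    pos = rng.sample(sites, cells)                 # random initial placement
    cost, temp = wirelength(pos), 5.0
    for _ in range(steps):
        i = rng.randrange(cells)
        new = pos[:]
        new[i] = rng.choice([s for s in sites if s not in pos])
        delta = wirelength(new) - cost
        # Accept improvements always; accept some uphill moves while hot.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            pos, cost = new, cost + delta
        temp *= 0.999                              # cool slowly
    return cost

print({seed: place(seed) for seed in range(3)})
```

Scale the toy up to hundreds of thousands of cells with timing and routing constraints, and it becomes clear both why a run takes hours and why re-rolling the seed is sometimes the only knob left.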

The difficulty is partially due to architectural trade-offs

The difficulty of the FPGA place-and-route problem is partially due to current FPGA architectures, which are optimized for capacity, performance, flexibility, etc. Compilation time is lower on the list since the number of recompiles is small relative to the number of FPGAs shipped. Perhaps there is a market for an FPGA optimized for fast synthesis & PR, or better yet, a pair of compatible FPGAs: one for fast development and one for high-volume production.

As an example of a different point in the architecture space that provides fast compile times, Quickturn developed a hardware emulation platform (licensed from IBM) based on a large number of digital logic processors connected by a novel, very dense interconnect. Compared to the systems of interconnected FPGAs used in other emulation systems, the compile times were dramatically faster, although the emulation frequencies were somewhat lower. This example is at the very high end (many hundreds of thousands of dollars per system), but it's enough to make me think that a similarly innovative approach could change the compile-speed/clock-frequency curve in the FPGA space.

I'm no expert on the economics of FPGAs, so maybe this is a real unserved market, or maybe there is not enough money to be made with such a system to justify the development costs. I assume that if FPGAs specialized for fast compiles were profitable the FPGA vendors would be making them.


I wonder if place-and-route would make a good MEMOCODE problem :-)

Fast Place and Route Approaches for FPGAs was mostly over my head, but perhaps I'm slowly starting to get an idea of how FPGAs work and how place-and-route is done.

Incidentally would DPL-FPGA make a good first FPGA for the Macbook-based hacker on the move?

A better choice (IMESHO)

would be to check out Opal Kelly's development boards. The XEM also uses the Spartan 3 line, and their "FrontPanel" software is especially conducive to learning the technology. The FIFO-over-USB primitives are extremely versatile. I have about a half dozen of these boards lying around the workshop now, and find them invaluable for experimentation.

"In the HW world, where you

"In the HW world, where you have a limited number of gates, a limited number of routing resources, and the resulting design has to meet timing and power consumption constraints--you don't have that luxury. A design which doesn't fit into the selected part or constraints is simply not acceptable."

Not to mention I/O pin assignments and power levels, I/O drive strength and slew, ground stability/simultaneous-switching noise, thermal concerns, power-on current draw, path delays leading to race conditions/skew... the list goes on and on....

Let's not even touch on the mess created by the recent addition of more "hard" IP into the mix.... Plop a PowerPC core, different types of block RAMs, DSP/multiplier arrays, etc., into the middle of this madness that you now *also* have to deal with...

Oh, and by the way, none of these issues are taken into consideration by the compiler/synthesis tool... It will give you a report and tell you if anything is outside of the spec you feed it, but that's usually about the extent of it. It's usually up to you to resolve any issues that arise. Yes, there is that whole "changing the seed" thing, but that's hardly a solution. Usually it comes down to good ol' fashioned floor-planning, at which point you're nearly back to laying out an ASIC in CMOS schematics...

So, really, what little tooling there *is* actually *does* very little for you, aside from performing a probabilistic search over a layout solution space.

These higher-level HDLs do solve *A* problem (an inexpressive source language) but do little toward solving *THE* problem (an extremely temperamental execution context), other than freeing your mind from HDL concerns so you can stay sane while facing the real challenges. They do that just "well enough" to make them worth it, so far.

To quote a recent FPGA Journal article...

"Of course, all those efforts have failed to pull many but the hardest-core performance-hungry over the chasm into the realm of FPGA-based reconfigurable computing. Unless you want to learn HDL or trust a bleeding-edge software-to-HDL compilation/synthesis tool, the task of getting your critical algorithm onto FPGA hardware was untenable - even for high-IQ cognoscenti like rocket scientists, cryptographers, and genetic engineers. When those folks are afraid that your solution is “too complicated to use,” you have something of a problem."


I could easily make software sound more scary than that :-)