BEE3: Putting the Buzz Back into Computer Architecture

Chuck Thacker is building a new research computer called the BEE3.

There was a time, years ago, when computer architecture was a most exciting area to explore. Talented, young computer scientists labored on the digital frontier to devise the optimal design, structure, and implementation of computer systems. The crux of that work led directly to the PC revolution from which hundreds of millions benefit today. Computer architecture was sexy.

These days? Not so much. But Chuck Thacker aims to change that.

To understand the importance of such machines in computing history, remember what Alan Kay says of the Alto, the predecessor machine used to develop Smalltalk at Xerox PARC:

We invented the Alto and it allowed us to do things that weren’t commercially viable until the 80’s. A research computer is two cubic feet of the fastest computing on the planet. You make it, then you figure out what to do with it. How do you program it? That’s your job! Because real thinking takes a long time, you have to give yourself room to do your thinking. One way to buy time is to use systems not available to most people.

I want one!


Hear, hear!

Architecture is bloody important. There are all sorts of neat things languages and operating systems (especially operating systems) could do if only the architectures allowed it, instead of just pushing constantly for a faster x86.

Is it really the x86 that's to blame?

Figure out what it is that you want to do. Implement it on available hardware. See if it's too slow. Refine and optimize. At some point you'll hit the limit of where the hardware can take you, and custom chips are the next step.

I think FPGA-built hardware is an exciting area, but I'm always surprised at how impressively Erlang runs on a $500 Dell box.

F# -> FPGA

Buy one!

It wasn't very clear from the article, but you can get it at

Not even clear there

It's not even too clear from that website that you can buy one. The price is not listed and there's no information about how you would go about buying one. They list a sales e-mail on the contact page, but that's about it.

Cost depends on who you are


I knew these were going to be expensive, but $50K-$75K for non-academic buyers? Ouch.


That is an excellent article. It brought back a lot of feelings regarding my former foray into computer architecture research. The most relevant points are that a lot of the research didn't have any future and that much of it was not applicable to real-world situations. The latter is not so bad by itself, but combined with the former it meant the work would never eventually become applicable.

I certainly hope they contribute some revitalization to the field of computer architecture research. It needs it.

Thanks for the post, Luke!

Engineering Research

“If you want to try out a new architecture or a new feature,” Thacker says, “it’s very difficult to do these days, because you have to actually build chips to build anything of substantial size. Using FPGAs, you don’t have to build chips..."


The viability of the FPGA approach was demonstrated last summer, when Zhangxi Tan, then an intern at Microsoft Research Silicon Valley, built a system for solving the problem of binary satisfiability, commonly used in design automation.

“He got much faster speed than what a computer could do,” Thacker reports, “because the algorithm is exactly suited for what can be done in FPGAs.”
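For context, Boolean (binary) satisfiability asks whether some assignment of truth values makes a given formula true. Checking candidate assignments is embarrassingly parallel, which is part of why the problem maps so well onto FPGA fabric. A minimal brute-force sketch in Python, purely illustrative and not a description of Tan's actual FPGA design:

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Return a satisfying assignment (tuple of bools) or None.

    A formula is given in CNF: a list of clauses, each clause a list of
    signed 1-based variable indices (positive = the variable must be
    True, negative = it must be False).
    """
    for bits in product([False, True], repeat=num_vars):
        # Every clause must contain at least one satisfied literal.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# (x1 or not x2) and (x2 or x3)
print(brute_force_sat(3, [[1, -2], [2, 3]]))   # a satisfying assignment
print(brute_force_sat(1, [[1], [-1]]))          # None: unsatisfiable
```

Each iteration of the loop is independent of the others, which is exactly the kind of structure that can be unrolled across FPGA logic instead of executed serially on a CPU.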

I was involved with one 'cute' CPU architecture research project, and I got out of the field precisely because of the first quote above. The thing with engineering research is that it is first and foremost about engineering: you don't get a prize for the cutest block diagrams if the company with the dropped 'e' in its name has chips that are better than yours. Engineering is as much about a huge corporate budget, voltage levels, crosstalk, process technology, manufacturing, C compilers, and drivers for power management as it is about cute block diagrams drawn by a few people. Playing with FPGAs (for general-purpose CPU research, not application-specific stuff like the second quote) is just pretending.

But that's the point

Of course there are barriers to entry. The goal here is to lower the barriers, to provide a way for people to do their research before they have that huge corporate budget. With a BEE3, you can dive right in, without having to waste time on assembling the box your CPU lives in. If you come up with something interesting, then you can buy half a dozen BEE3s and get users to try it out. If it works out for them, then you can go looking for funding.

Now, sure, if what you've got competes against Intel, then your only way to get funding is probably to get Intel to acquire you. But, if you've got a niche Intel isn't interested in, you might be able to get VC money, or sell the technology to IBM, or something.

...but it might not be that cheap

It sounds like it might still be too expensive for this sort of scenario. RAMP is talking about enabling researchers to log into BEE3s at Berkeley over the Internet; that means they expect it to be expensive.

I've written the company to ask; if I hear back, I'll post here.

Are current machines inadequate for PLT research?

By "current machines", I mean off-the-shelf computer systems which a reasonably well-funded research organization can afford.

Certainly, there are ways in which off-the-shelf systems are sub-optimal. But the situation we have today is a lot less dire than the situation facing Alan Kay and his team at PARC, where the off-the-shelf computing technology of the 70s (ugly terminals, if that, and mainframes/minis) was inappropriate for what they were trying to build. (And Smalltalk is an unusual case because of the whole environment; an OO-based Lisp derivative with unusual syntax, sans the Smalltalk environment, certainly could have been hosted on a terminal of the time.)

The Lisp Machine may be a more interesting case of an alternate computer architecture (especially down at the micro-architecture level).

But what cannot be done--or is only done with great difficulty--on current machines? The bottleneck on most modern computers these days isn't CPU speed, it's I/O latency and bandwidth. Parallelism is one way to attack the bandwidth limitation, and FP is a good way to make parallel computation more tractable; but existing computer architectures can be used to enable PL research here.


I suppose you could build an Erlang Machine. Single-assignment might mean you could optimize the cache and the pipeline in ways not available to x86.


I don't see it. Referential transparency helps concurrency; single-assignment, maybe, if you mean the same thing? But if you envision such an Erlang machine, you normally don't have an infinite supply of registers, so, well, I don't see it.
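For readers unfamiliar with the term in this exchange: "single-assignment" means each variable is bound at most once, as in Erlang. Once-bound, pure computations share no mutable state, so a scheduler (or, the poster speculates, hardware) is free to run them in any order or in parallel. A minimal sketch in Python (standing in for Erlang, and assuming nothing about any actual machine design):

```python
from concurrent.futures import ThreadPoolExecutor

def f(n):
    return sum(range(n))      # pure: reads no shared mutable state

def g(n):
    return max(range(1, n))   # pure as well

# Each name below is bound exactly once, and the two computations are
# independent, so submitting them concurrently cannot change the result.
with ThreadPoolExecutor() as pool:
    a = pool.submit(f, 1000)
    b = pool.submit(g, 1000)
    result = a.result() + b.result()
print(result)
```

The same independence is what single-assignment gives an optimizer or a pipeline: no write-after-write hazards to track between once-bound values.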

Research Accelerator for Multiple Processors

By "current machines", I mean off-the-shelf computer systems which a reasonably well-funded research organization can afford.

Like RAMP?

"... little is known on how to build, program, or manage systems of 64 to 1024 processors, and the computer architecture community lacks the basic infrastructure tools required to carry out this research."

Yes, very much like RAMP

This is one of the machines that RAMP will be using (according to the article "Introducing RAMP Gold" on the RAMP page you linked to).

We just bought one

We just bought one of these at Sun Labs and it's pretty sweet. It makes a hell of a lot of noise and heat, though. It has its own office now ;)