Deconstructing Process Isolation

Deconstructing Process Isolation.
Mark Aiken; Manuel Fahndrich; Chris Hawblitzel; Galen Hunt; James R. Larus. April 2006

Most operating systems enforce process isolation through hardware protection mechanisms such as memory segmentation, page mapping, and differentiated user and kernel instructions. Singularity is a new operating system that uses software mechanisms to enforce process isolation. A software isolated process (SIP) is a process whose boundaries are established by language safety rules and enforced by static type checking. With proper system support, SIPs can provide a low cost isolation mechanism that provides failure isolation and fast inter-process communication. To compare the performance of Singularity's approach against more conventional systems, we implemented an optional hardware isolation mechanism. Protection domains are hardware-enforced address spaces, which can contain one or more SIPs. Domains can either run at the kernel's privilege level and share an exchange heap, or be fully isolated from the kernel and run at the normal application privilege level. These domains can be used to construct Singularity configurations that are similar to micro-kernel and monolithic kernel systems.

The paper concludes that hardware-based isolation incurs performance costs of up to 25-33%, while the lower cost of SIPs permits them to provide protection and failure isolation at a finer granularity than conventional processes.
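
To make the SIP model concrete, here is a minimal Rust sketch. It is only an analogy: threads and a channel stand in for SIPs and the exchange heap, and nothing here is the paper's Sing#. Still, it shows isolation coming from the type system rather than the MMU, with message ownership transferred on send and that transfer checked statically.

```rust
// Analogy only: threads and a channel stand in for SIPs and Singularity's
// exchange heap. The point is that isolation comes from the type system,
// not from the MMU.
use std::sync::mpsc;
use std::thread;

// A message allocated by the sender and handed over, whole, to the receiver.
struct Request {
    id: u32,
    payload: Vec<u8>,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Request>();

    // The "server" SIP: it owns everything it receives; nothing is shared.
    let server = thread::spawn(move || {
        for req in rx {
            println!("request {}: {} bytes", req.id, req.payload.len());
        }
    });

    // The "client" SIP: after send, the type system forbids touching the
    // message again, roughly the exchange heap's ownership-transfer rule.
    let req = Request { id: 1, payload: vec![0u8; 64] };
    tx.send(req).unwrap();
    // Using `req` here would be a compile-time error: ownership moved on send.

    drop(tx); // close the channel so the server's receive loop terminates
    server.join().unwrap();
}
```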

Maybe it's time to revisit the language-as-OS theme...

The way development should be?

This is great work in that it really addresses many of the big issues in development and takes a very "modernistic" approach to them: reliability, speed, efficiency, and security, all brought together by using system-wide constraint propagation and theorem provers.
This is, in my view, the way code should be developed and where languages should go (Spec#).
They have two video presentations; great fun to watch.

Go embedded!

This sounds like a great idea for the embedded market. Most CPUs there simply don't have MMUs, and with more and more complicated software running (concurrently) on embedded devices (and mobile phones), advanced software techniques could surely help.

I'm not sure if it's funny or sad that it's 2006 and we're still programming in C.

Not only C...

...but C++ as well!

This work is important, and SIPs could also be used in conventional programming languages to limit the type of processing allowed by certain parts of the code, thus increasing safety and correctness.

Modern application servers are like operating systems: they provide isolation, resource pooling, event management, communication, and other things that OSes do, all in software. Why are operating systems considered inappropriate for doing the job of application servers? Could it be that the languages used for building programs for those OSes are unsafe?

[somewhat OT] OSes as application servers

Actually, I've been wondering for a while why we use heavyweight application servers that, for instance, encapsulate Java beans, when we could simply use communication techniques that reside in the kernel. By living without dozens of user-space libraries that transform to/from XML over several layers, things could be wildly fast.
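
Just to illustrate what "communication techniques that reside in the kernel" could look like, here is a rough Rust sketch: a Unix domain socket pair carrying raw bytes, with no XML or container layers in between (Unix-only; a sketch, not a recommendation of any particular API).

```rust
// A socketpair in the kernel, raw bytes on the wire, no serialization layers.
// Unix-only; error handling kept minimal.
use std::io::{Read, Write};
use std::os::unix::net::UnixStream;
use std::thread;

fn main() -> std::io::Result<()> {
    let (mut client, mut server) = UnixStream::pair()?;

    let receiver = thread::spawn(move || {
        let mut buf = [0u8; 64];
        let n = server.read(&mut buf).expect("read failed");
        println!("got: {}", String::from_utf8_lossy(&buf[..n]));
    });

    client.write_all(b"hello via the kernel")?;
    receiver.join().unwrap();
    Ok(())
}
```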

Interestingly, this paper goes in the opposite direction: it encourages us to simply live without the OS and its hardware resource protection. Of course both ways have their advantages (such as running totally untrusted code in a non-emulated sandbox).

To answer your question: I think there are just not enough skilled people to write operating systems. As Rob Pike said, systems software research is becoming irrelevant, while vendors write their software only for two or three left-over standard platforms. Everything has its own compatibility or portability layer now (even standard open source programs carry huge, complicated configure scripts and build mechanisms with them).

Apple tried to do Copland (to revive the old Mac OS), but they failed. BeOS is dead, and Zeta just went bankrupt. Other operating systems still struggle with the problem of hardware drivers.

What we have today (from a species point of view) is just VMS (Win NT), Unix (Linux, BSD, Solaris), and NeXTStep (Mac OS). It seems like either there's no architectural skill left, or it's just a conspiracy by Intel & Co. to sell us dozens of slow compatibility layers, so we end up buying new hardware ;)

Go Embedded!

Since you mention embedded in the post above, I'm sure you're aware that we have more than the server/desktop OSes you mention. We also have ThreadX, VxWorks, uC/OS-II, and many others.

For many embedded platforms there's now also a tendency to implement a minimal HAL-like layer and then adapt various open-source components on top of it. Many of us can't afford to run a whole OS, so we cobble together what we need, burn it into flash, hit reset, and go.
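
A rough Rust sketch of what such a HAL-like layer can look like; the names (Uart, ConsoleUart, log_line) are invented for illustration and not taken from any particular project.

```rust
// Hypothetical names for illustration: a tiny trait is the "HAL", and the
// reusable code above it never mentions a concrete board.
pub trait Uart {
    fn write_byte(&mut self, byte: u8);
}

// One board's implementation; a real one would poke memory-mapped registers
// instead of printing to stdout.
pub struct ConsoleUart;

impl Uart for ConsoleUart {
    fn write_byte(&mut self, byte: u8) {
        print!("{}", byte as char);
    }
}

// A portable component written only against the HAL trait.
pub fn log_line(uart: &mut impl Uart, msg: &str) {
    for b in msg.bytes() {
        uart.write_byte(b);
    }
    uart.write_byte(b'\n');
}

fn main() {
    let mut uart = ConsoleUart;
    log_line(&mut uart, "booting...");
}
```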

uCLinux

uCLinux (www.uclinux.org) is a big success on CPUs without MMUs. Their approach (quote from the FAQ): 'There is no memory protection. Any program can crash another program or the kernel. This is not a problem as long as you are aware of it, and design your code carefully.'

I think this is a bit optimistic, but MMU-less operation is a real requirement in cost-sensitive applications, and alternatives like the one mentioned in the article will be needed.

uCLinux is what is used for

uCLinux is what is used for the Linux on Nintendo DS project. It works quite well on the DS: http://www.dslinux.org.

Erlang

This sounds exactly like how Erlang works: it scales to millions of virtual processes that all run in the same OS process.

Is isolation the only way?

Isolation is not the only way to deal with failures. It may be possible to recover a failed process. Even if a process can't be recovered, a failure might require action in other processes, and so on.
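
A rough Rust sketch of that recovery idea, with thread panics standing in for process failures; this illustrates the general supervise-and-restart pattern, not any particular system's mechanism.

```rust
// Thread panics stand in for process failures; the supervisor notices the
// failure and restarts the worker instead of relying on isolation alone.
use std::thread;
use std::time::Duration;

fn worker(attempt: u32) {
    // Pretend the worker only succeeds on its third try.
    if attempt < 3 {
        panic!("worker failed on attempt {}", attempt);
    }
    println!("worker finished cleanly on attempt {}", attempt);
}

fn main() {
    for attempt in 1u32.. {
        let handle = thread::spawn(move || worker(attempt));
        match handle.join() {
            Ok(()) => break, // clean exit: nothing to recover
            Err(_) => {
                // Failure detected: take action in the rest of the system
                // (here, just back off and restart the worker).
                println!("supervisor: restarting after failure {}", attempt);
                thread::sleep(Duration::from_millis(10));
            }
        }
    }
}
```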

What is the advantage over hardware?

From what I can see, Singularity is a microkernel architecture with severely limited processes (they must use a high-level language, and there is no dynamic loading of code) in order to speed up IPC. Otherwise the security model is the same: they isolate processes and use message passing to communicate. Is this really worth it over a microkernel architecture that uses hardware protection and allows processes written in assembly?

If I understand you

If I understand you correctly, you are asking about the fundamental reasoning behind Singularity, not about the specific issue in this paper. If so, maybe this thread would be of interest to you.

Good question

They measure the overhead at roughly 37% for processes with lots of IPC. If you really write all your code in low-level languages without runtime protection (say, C + asm), then you might have an advantage, especially since you need hardware protection anyway to run third-party binary code.

OTOH, the paper quantifies the cost of runtime checks (in their C# dialect) at about 5%, so C wouldn't really be much faster (assuming that C and their C# have the same quality of code generation).
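
For illustration, here is roughly what such runtime checks amount to, sketched in Rust rather than the paper's C# dialect; the principle (per-access checks that safe languages insert and optimizers try to eliminate) is the same.

```rust
// Indexed access carries a runtime bounds check; an out-of-range index makes
// the program panic instead of silently corrupting memory. Plain iteration
// needs no per-element check, which is where optimizers claw the cost back.
fn sum_indexed(data: &[u32], indices: &[usize]) -> u32 {
    indices.iter().map(|&i| data[i]).sum()
}

fn sum_all(data: &[u32]) -> u32 {
    data.iter().sum()
}

fn main() {
    let data = [1, 2, 3, 4];
    println!("{}", sum_indexed(&data, &[0, 2]));
    println!("{}", sum_all(&data));
}
```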

I like the idea of having the possibility to bypass OS protection domains, if you agree to use verified/verifiable code.

Costs of uK hardware protection

Microkernel architectures with multiple hardware-enforced protection domains tend to have performance problems. See Defending Against Denial of Service Attacks in Scout for one read-between-the-lines example, and my favorite performance paper ever, "Linux on the OSF Mach3 Microkernel" (Francois Barbou des Places, First Conference on Freely Redistributable Software, Cambridge, MA, 1996), for another.