Promising OS's from a Programming Language Perspective

The topic "Choice of OS of LtU readers" asked the wrong question and hence got a sequence of boring I use Linux or I use Windows answers. So let's ask the Right Question.

What new OS shows most promise of ...
a) Doing "The Right Thing" (in the Programming Language Designers overwhelmingly over developed Sense of Right),
b) Being usable in the near term,
and, as such, merits our attention, support and dual boot disk space?

A quick bit of Googling and weeding of the results turns up...

Any other suggestions / commentary on these OS's most welcome.

Separation of concerns, please

Perhaps I'm overreacting, but I tend to take a dim view of projects (especially those intended for production environments; research projects are another matter altogether) which excessively integrate and couple programming languages with operating systems. Obviously, an OS must be implemented in some programming language, and must export an API in at least one programming language to be useful. C/C++ is a common choice these days for both chores, for better or worse.

However, when I wear my hat as a computer user (not necessarily a programmer), one thing I want is the ability to deploy and use software written in whatever language--it is the responsibility of the OS designer to make sure that this is possible. I certainly don't want a system in which everything must be implemented in X, even if X is Smalltalk or Lisp or some other sensible language. :) Nor am I terribly interested in an OS designed by someone who only has domain expertise in PLT, or vice versa.

Regarding the OS's above: They all have interesting ideas in them; other interesting OS projects include things like Eros. But whatever OS I choose, I don't want it to constrain my choice of programming language.

As a TUNES.org member...

I agree. BriX is actually a tunes.org spin-off, although its author doesn't understand half of what's written on the website.

In any case, even the perspective on the Tunes website indicates, however impenetrably, that the right approach is to integrate or at least connect programming language ideas into one environment, and then write systems software using this network of ideas rather than it being a language-OS per se.

The problem is that there's no research to support this, and hasn't been since Java dried up funding for anything interesting.

The work I do on Slate is intended to allow decoupling of design decisions, which will eventually include multi-lingual setups for programming languages. But I have no idea what it will take to solve the problem of constraining language choice - Unix is, after all, designed for and in the C language, and every language running on it makes compromises to deal with that.

Research funding

The problem is that there's no research to support this, and hasn't been since Java dried up funding for anything interesting.

This is not true. The PyPy project is funded by the EU to the tune of €2 million. But Holger, Armin, Christian et al. really tried to get funded, and did not just lament that nobody loves them despite their great minds and ideas.

Touché

Yeah, these are both true, and Armin is also a former TUNES collaborator. Part of the general "strategy" of those who did work on the project was to contribute to other, less ambitious projects, like the work I did to assist SqueakMap's development or Common Lisp's asdf-install. Francois, the author, also writes other interesting CL packages, but should be doing more.

So, you're right, I can't be so one-sided about the whole thing.

TUNES definitely became an "all or nothing" project to its detriment, but the point of Slate was to make a compromise-level system.

Even with all that, the Java work was acquired at greater expense than it ought to have required, and there's a widespread tacit acceptance of this.

OS integration to the environment

I agree with Brian (Waters). But it's unfortunate that the Slate project has turned away many possible contributors because of unfortunate personalities.

Unfortunate Personalities

What would you suggest? Feel free to reply off-list if appropriate.

Hence my Question....

I want to be able to use the incredibly rich toolset that is available at my fingertips when I'm at my Debian Linux box. So that is where I live and work, in Debian Linux.

However, Programming Language Designers are all dreamers with incredible Hubris. We believe things can be better. We believe there is yet another Paradigm out there that will make things a lot Simpler and more Elegant. (Worse, we have a tendency to use Too Many Capitals.) We all have proto-language designs on the back burner that we endlessly fiddle with and tweak.

But anyone who has done this knows there are two very hard, very messy parts. One is talking to real, as-delivered hardware (yuck-yuck-mega-yuck), and the other is grafting the ability to talk to the POSIX semantics universe onto your supremely clean and elegant language design. Blecchh!

Ugly Hardware we have to live with until Hardware designers grow religion, but POSIX semantics?

Something is only worth doing if it really can change things. I really believe there is a better Programming Language paradigm "Out There", and if it is used, it will change the very nature of what it is used on; i.e. the very nature of OS's will be changed dramatically by coding them in a dramatically better language. If we can't, why bother designing a new language?

POSIX semantics is irretrievably wedded to C semantics; to step beyond that, we are going to have to grow an OS that doesn't have that rich toolset at first, i.e. an OS that won't be what we use most of the time at first.

Coyotos

Doesn't meet both qualifications a) AND b), but Coyotos is pretty interesting from a PLT perspective, since they're using a custom-designed language, BitC, to write it in (also discussed previously on LtU). I'd imagine it will incorporate ideas about capabilities from the E language.

"The Right Thing" == {Security, Usability, etc.}?

I think you beg the question, "What is the definition of The Right Thing?" The Venn diagram of what a language should support and what an OS should support would show that they overlap but are not identical. Given the way modern OSs make us suffer with viruses etc., I'd vote strongly for an OS that is foremost based on a real security model. So I'd vote for things like Coyotos, Inferno, etc. Any of the relatively mainstream OSs (Win, Mac, Linux) are pointless from a philosophical perspective when it comes to security.

However, the most secure system in the world (fear Trusted Computing) is pointless if it doesn't support doing the things I need to do. That's a whole other can of worms.

What's the OS's job?

I strongly agree with the views of Xok's creators on this issue. The OS is there to securely multiplex the hardware. Full stop. Everything else after that is someone else's problem and should not be in the kernel. Why? Anything you put in the OS, any abstraction you build, imposes a trade-off on all programs. Instead, take the first line of their page: "Putting the Application in Control".

For PLT geeks, Xok is not as interesting as some of the others, but it is still quite interesting. Use-wise, just imagine having essentially full access to the hardware. Want memory-mapping support for security and GC? Done. Making a database application and the FS is getting in the way? Don't use it. Want fast, very lightweight threads? Don't fake 'em, use the hardware. Want to experiment with totally different ways of doing (what are usually considered) OS issues (e.g. file systems, scheduling, networking)? You can, without writing an OS. Implementation-wise, there are many interesting and ingenious ideas in it. I highly recommend reading how the filesystem works in Dawson Engler's thesis.
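To make "the application in control" concrete, here is a minimal, purely hypothetical sketch of the library-OS idea (Java as the illustration language; Xok itself is written in C and exposes nothing like this interface): the filesystem is just a library the application picks, not an abstraction the kernel imposes on everyone.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// The library-OS idea in miniature: storage is an ordinary library the application
// links against and can swap out, not an abstraction the kernel forces on everyone.
interface Store {
    void put(String key, byte[] value) throws IOException;
    byte[] get(String key) throws IOException;
}

// A conventional, general-purpose implementation -- think "the stock libOS filesystem".
class FileStore implements Store {
    private final Path dir;
    FileStore(Path dir) { this.dir = dir; }
    public void put(String key, byte[] value) throws IOException {
        Files.write(dir.resolve(key), value);
    }
    public byte[] get(String key) throws IOException {
        return Files.readAllBytes(dir.resolve(key));
    }
}

// What a database might prefer instead: no directories, no metadata it doesn't need,
// just its own index (a stand-in for managing the storage layout itself).
class DbStore implements Store {
    private final Map<String, byte[]> index = new HashMap<>();
    public void put(String key, byte[] value) { index.put(key, value); }
    public byte[] get(String key) { return index.get(key); }
}

public class ApplicationInControl {
    public static void main(String[] args) throws IOException {
        // The application, not the kernel, decides which storage abstraction it gets.
        Store store = args.length > 0 ? new FileStore(Path.of(args[0])) : new DbStore();
        store.put("greeting", "hello, exokernel".getBytes());
        System.out.println(new String(store.get("greeting")));
    }
}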

One thing to note: looking at many of the OSes listed, at many of the OS projects out there, and at the replies here, a lot of people seem to believe that going in the exact opposite direction from Xok is the Right Way.

Re: the right job

From the Xok web site: exciting graphs, but then, "Last updated by Marc 5, 1998." It is just a reminder to myself that, whatever an OS should do from anybody's perspective, for the general public it clearly has to be actually available and actually doing all sorts of fancy-pants stuff. Or maybe that just means it has to let the apps do the fancy-pants stuff; whatever. I'm just sad that market forces and humanity's lack of imagination always seem to add up to big hurdles that keep better solutions from becoming real, viable 'products'.

Xok's status

I'm fairly confident Xok was always intended to be a research OS. And unfortunately, it's pretty unrealistic to expect any major new OSes, period. There are just too many man-hours required to get to a point where hardware vendors would even look at supporting you, and no one is going to use your OS if it doesn't support their hardware. It's a vicious cycle, and the effort to break out of it is enormous. The exokernel initiative was partly meant to deal with this issue in the context of OS research, by making it much simpler to do OSy things with less effort. On the brighter side, though, many ideas from Xok have been incorporated into major OSes.

Finally, they did seem to prove their point, which suggests that there is value in their approach and in the principles that guided it. That is more than can be said for many other approaches to OSes...

Xok is microkernelish

I don't think all projects are going in the exact opposite direction. I see Xok as a novel way of achieving the goals of a microkernel (reliability and flexibility).

Traditional microkernels put everything in user-level processes, letting you create whatever abstractions you want. But since they rely on heavyweight hardware protection, IPC is slow. Xok was an attempt to retain the flexibility but achieve better performance with a different abstraction architecture. Now, projects like BriX and Singularity use safe languages to give you efficient IPC, which makes some of the Xok techniques unnecessary. The implementation details are different, but the ultimate goals are the same.
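A rough illustration of that point, with Java standing in for a safe systems language (this is only a sketch; Singularity's channels are contract-checked Sing# constructs, and its exchange heap also transfers ownership of each message, which plain Java cannot express): once the type system rather than the MMU keeps processes apart, sending a message is essentially handing a reference across a typed channel.

import java.util.concurrent.SynchronousQueue;

// Illustration only: with a safe language in a single address space, "IPC" between two
// logical processes is a reference handoff through a typed channel -- no page-table
// switch, no copy across a protection boundary, no kernel trap.
record Request(int id, String payload) {}

public class SafeLanguageIpc {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<Request> channel = new SynchronousQueue<>();

        Thread server = new Thread(() -> {
            try {
                Request r = channel.take();   // receive: just a pointer, typed end to end
                System.out.println("served request " + r.id() + ": " + r.payload());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.start();

        // Send. In Singularity the exchange heap would also move ownership here,
        // so the sender could no longer touch the message after handing it over.
        channel.put(new Request(1, "read /etc/motd"));
        server.join();
    }
}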

Microkernels

The goal of Xok is not to "move things to user-level"; it's to "move things out of the kernel", where "kernel" is defined to be anything you "can't change and can't avoid". As mentioned in some of the exokernel papers, (typical) microkernels don't do this. The things they move to "user-level" still cannot be changed or avoided, and thus, by the above definition of "kernel", are still in the kernel.

Incidentally, exokernels are not so much "a different abstraction architecture"; the guiding principle is to get rid of abstraction. Further, while efficiency was important, Xok wasn't trying to "retain flexibility and gain performance" but to gain flexibility with adequate performance (preferably better performance, by avoiding trade-offs). And it appears they did succeed.

Microkernels

Ah... I think I see what you're saying now. Most microkernels end up forcing everyone to use the standard FS process and the standard network process. I think there's a silver lining: there's nothing inherent in the high-level architecture that stands in the way. Much of the isolation work that has gone into microkernels can be repurposed to make life easier for the low-level hardware multiplexing in Xok.

An O/S is (or should be) a lot more than a kernel.

The primary concern of applications nowadays is how to manage (mainly distributed) programs, including the sharing, access and management of programs' data and functionality. Unfortunately, no O/S offers a generic solution that makes it really easy for everyone who uses that O/S to write, share and manage functionality and data.

The reason for that is not that it is difficult to do; the reasons are economic and social: commercial O/S vendors are primarily interested in locking their customers in, and therefore offer no such solution in the basic package; open source O/S developers are usually not interested in that stuff.

A kernel alone does not approach even 1% of modern needs. In fact, a kernel alone is useless. What most people need from computers is a way to quickly enter information, store that information, later view it or process it to create new information, and do the whole thing either alone or in co-operation with others (who may be near or far away).

Unfortunately, there is no O/S that offers such functionality to end users (either professionals who code for a living or amateurs who want to cover simple needs), and that's the reason we have so many monolithic and inflexible environments.

An O/S should provide the basic protocols and mechanisms for the above to be easily implemented. An O/S should promote sharing of data and functions and promote co-operation between programs. Writing a new program should be no more than specifying the data types and the functions that process those data. Without help from the O/S, this is not possible. That is the reason we have so many monolithic environments that find it very hard to cooperate with one another, so many programming languages with only small differences between them, so many problems.
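As a purely hypothetical illustration of that last claim (the Environment interface below does not exist in any real O/S; it is invented here only to show the shape of the idea): if the environment owned storage, sharing and lookup, a "new program" really would shrink to a data type plus the functions over it.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch only -- no mainstream O/S exposes a service like this.
interface Environment {
    <T> void publish(String name, T value);          // make data visible to other programs
    <T> List<T> query(String name, Class<T> type);   // find data other programs have shared
}

// The program itself: one data type...
record Appointment(String who, String when) {}

// ...and the functions that process it. Persistence, sharing and co-operation
// are the environment's problem, not this program's.
class CalendarProgram {
    static void add(Environment env, Appointment a) { env.publish("appointments", a); }
    static List<Appointment> upcoming(Environment env) { return env.query("appointments", Appointment.class); }
}

public class EnvironmentDemo {
    // A throwaway in-memory Environment so the sketch actually runs.
    static Environment inMemory() {
        Map<String, List<Object>> store = new HashMap<>();
        return new Environment() {
            public <T> void publish(String name, T value) {
                store.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
            }
            @SuppressWarnings("unchecked")
            public <T> List<T> query(String name, Class<T> type) {
                // type is ignored in this toy implementation
                return (List<T>) store.getOrDefault(name, List.of());
            }
        };
    }

    public static void main(String[] args) {
        Environment env = inMemory();
        CalendarProgram.add(env, new Appointment("Alice", "Monday 10:00"));
        System.out.println(CalendarProgram.upcoming(env));
    }
}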

I am not suggesting, of course, that "one shoe fits every foot". The kernel should be there for everyone to abuse the hardware in any way imaginable. But every O/S should at least provide an environment where the needs of most users are satisfied. As Steve Jobs once said about the NeXT O/S: "the NeXT O/S will do 80% of the work each application needs, and the application will do the other 20% which is the really interesting part and different from any other application".

All it should and all it does....

Hmm. Half of me agrees with you and half doesn't.

While an Exokernel could be a basis for an OS, I don't believe it can be all that an OS should be.

Look at it this way, if securely multiplexing the hardware is the only thing it should do, it is also all that it does.

A lot of POSIX semantics are about IPC; Xok does none of that, leaving it to a POSIXy layer. So is the OS Xok, or Xok + POSIXy layer?

I agree it is a nice thing that you could have Xok with an XYZZY layer for any value of XYZZY, but that just pushes the Question from "which kernel should we use?" (Elkin's Answer: Xok) to "which services layer should we use?" (Short & Useless Answer: not the POSIXy one.)

The multiplexing of the hardware is almost the boring bit of OS's.

What we care about from the programming language perspective is at a far higher level. What are the semantics of IPC? (I don't want to communicate/persist a stream of bytes, I want to communicate a language primitive object.) What are the semantics of persistence? What are the semantics of (yuck!) threading? (What can be reordered where?) What are the semantics of security? (Think Perl/Ruby taintedness.)

The OS is the final arbiter and controller of the system. How far does that extend? Just this CPU? Just this SMP motherboard? This Beowulf cluster? The PCs on this LAN segment?

OS=Exokernel+PL RTS

What we care about from the programming language perspective is at a far higher level. What are the semantics of IPC? (I don't want to communicate/persist a stream of bytes, I want to communicate a language primitive object.) What are the semantics of persistence? What are the semantics of (yuck!) threading? (What can be reordered where?) What are the semantics of security? (Think Perl/Ruby taintedness.)

My answer is: the semantics of these are defined by a PL! Let PL runtime systems interface directly with the hardware, tamed by an exokernel!

PL should handle semantics, O/S should handle protocol.

To use your example, all persistence comes down to streaming bytes, so the O/S should provide the protocol for that. A host PL should then take advantage of that protocol and build its persistence semantics on top of it.
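That division of labour already exists in miniature today: the O/S hands the language a byte-stream protocol (files, sockets), and the language runtime layers its own persistence semantics on top. A small Java example using the standard serialization machinery over an ordinary file:

import java.io.*;

// The O/S side of the bargain is just a byte stream (here, a file); the language
// runtime supplies the semantics of persistence on top of it.
public class PersistDemo {
    // Any Serializable object will do; records are serializable if declared so.
    record Note(String title, String body) implements Serializable {}

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        File file = File.createTempFile("note", ".ser");

        // PL-level semantics ("persist this object") ...
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new Note("todo", "read the exokernel papers"));
        }
        // ... implemented on the O/S-level protocol ("write these bytes").
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            Note n = (Note) in.readObject();
            System.out.println(n.title() + ": " + n.body());
        }
    }
}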

Sticking this here

To everyone: I do want to say that I agree that the OS is more than the kernel, and the title of my first post would more precisely be "What is the kernel's job?". Exokernels, as I have said, explicitly avoid doing the job of the OS; instead the OS becomes a library (or a collection of them) on top of the exokernel called, appropriately enough, a library OS. There is still no (necessary) overhead, as exokernels allow one-sided trust, mutual trust, and mutual distrust between processes. An application trusting a library OS (while the libOS doesn't trust the application) would be akin to the typical kernel setup of today. Further, library OSes are the engine of portability, as working directly with the exokernel could be extremely non-portable. Of course, you can always ignore some or all of any libOS.

So another title for my original post might be "What should the kernel not do?" (and should instead be pushed to the OS, which is outside the kernel). As far as what the OS's job is, I don't have a very strong conviction about any answer, but with an exokernel setup you wouldn't need one. You don't like POSIX? Use some other libOS, or write your own version of the things you need (using a POSIX layer or something else for the things you don't much care about). Want to do Andris Birkmanis's suggestion? You can, while still running a POSIX layer in parallel. With an exokernel such things would be possible, the urgency of finding the Right One (which I don't believe exists in general) would be vastly diminished, and experimentation would be much easier.

Singularity

Singularity from Microsoft looks interesting, especially the idea that all code is verified to be type-safe and that processes can only share memory in very limited ways.

Overall, I think the single most important thing is to get away from raw memory access in any non-driver code. This is especially true for code that accesses the kernel.

Following a close second is pure application isolation. Both executables and processes need to be isolated from one another except through well-established means. In light of spyware and viruses, we can no longer randomly graft programs onto other programs the way we do in Windows.

Did you know that some Windows device drivers literally add and alter buttons on the Display control panel? Why in the world did we give any random program the ability to alter any other random program in any way it chooses?

Sometimes you need a new button

Why in the world did we give any random program the ability to alter any other random program any way it so chooses?

Easy: sometimes you need a new button that the designers didn't dream of. There's a danger in locking things down too tightly as well. An OS with no way to grow quickly becomes a lot of wasted code.

And then the next version is released...

The decoy display control panel
Last time, we saw one example of a "decoy" used in the service of application compatibility with respect to the Printers Control Panel. Today we'll look at another decoy, this time for the Display Control Panel.

When support for multiple monitors was being developed, a major obstacle was that a large number of display drivers hacked the Display Control Panel directly instead of using the documented extension mechanism. For example, instead of adding a separate page to the Display Control Panel's property sheet for, say, virtual desktops, they would just hack into the "Settings" page and add their button there. Some drivers were so adventuresome as to do what seemed like a total rewrite of the "Settings" page. They would take all the controls, move them around, resize them, hide some, show others, add new buttons of their own, and generally speaking treat the page as a lump of clay waiting to be molded into their own image. (Here's a handy rule of thumb: If your technique works only if the user speaks English, you probably should consider the possibility that what you're doing is relying on an implementation detail rather than something that will be officially supported going forward.)

In order to support multiple monitors, the Settings page on the Display Control Panel underwent a major overhaul. But when you tried to open the Display Control Panel on a system that had one of these aggressive drivers installed, it would crash because the driver ran around rearranging things like it always did, even though the things it was manipulating weren't what the developers of the driver intended!

The solution was to create a "decoy" Settings page that looked exactly like the classic Windows 95 Settings page. The decoy page's purpose in life was to act as bait for these aggressive display drivers and allow itself to be abused mercilessly, letting the driver have its way. Meanwhile, the real Settings page (which is the one that was shown to the user), by virtue of having been overlooked, remained safe and unharmed.

There was no attempt to make this decoy Settings page do anything interesting at all. Its sole job was to soak up mistreatment without complaining. As a result, those drivers lost whatever nifty features their shenanigans were trying to accomplish, but at least the Display Control Panel stayed alive and allowed the user to do what they were trying to do in the first place: Adjust their display settings.

http://blogs.msdn.com/oldnewthing/archive/2006/01/10/511201.aspx

jNode

Singularity appears to be mostly a re-invention of the much older jNode project, which is building an entire (usable) OS in Java. As in Singularity, only type-safe Java code is allowed, and the kernel is mostly written in Java too (with a bit of assembly). They have a GUI framework, a command line, etc. Worth checking out:

http://jnode.org/

:(

Actually, it seems that JNode is moving away from what I thought was the most interesting aspect, namely the lack of processes.

An OS in which you have only threads, a single address space, and type security to prevent objects from getting access to other objects they shouldn't be able to reach seemed pretty elegant and interesting to me, but I see that for their next version they plan to implement Isolates, which are basically Java processes. Code running in different Isolates cannot share objects :(

Isolates seem to have been developed for J2ME cellphones, whose chips have no hardware protection or MMU but still need to replicate the standard OS process model.

Trust me...

...as an embedded systems developer with nearly 10 years of experience now on vxWorks, you do not want a single-address-space OS if you can avoid it, especially for a machine which will be running multiple independent applications, or serving multiple users. Shared-state concurrency (a paradigm that such beasts heavily encourage, and often sell as their strong point) is IMHO an egregious anti-pattern--but a seductive one which, like the sirens of the Odyssey, has lured many programmers to their doom. Even on vxWorks projects which attempt to restrain SSC by fiat (and vxWorks does provide an efficient message queueing mechanism, should you want to use it), inter-task clobbering is still entirely possible, and the OS does nothing to prevent it.

Another example of such an architecture is, of course, AmigaDOS. Many Amiga fans will sing the praises of the Amiga--a fine machine that did things in the 1980s that it took PCs fifteen years to catch up to. The single-address-space, pre-emptive kernel of the Amiga certainly had numerous advantages over the single-address-space cooperative kernel (essentially, processes mapped onto system-level coroutines) found in MacOS and Windows 3.1. But the Amiga suffered from the same flaw as its peers, in that any misbehaving app could trash any other app, or the system itself. In 1985, the performance gains were probably more important than the reliability hit. But it's now 2006.

Now, for an RTOS like vxWorks, or for a media-intensive OS like AmigaDOS running on a 68k processor, the above is often a reasonable design choice. Use of an MMU adds latency to a system, and may make the OS unsuitable for some applications. But if you're developing software which runs on a PC or a server...again, you don't want this sort of system.

Outside of specialized applications, it ain't worth the headache.
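For what it's worth, the shared-state anti-pattern described above and its message-passing alternative are easy to show even in a toy (Java here rather than vxWorks C, with threads in one JVM standing in for tasks in one address space): two tasks bumping a shared counter quietly lose updates, while giving the counter a single owner and sending it messages -- the moral equivalent of using the message queues -- stays correct.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// The seductive version: two tasks share a counter directly. Without a lock, updates
// interleave and some are lost -- and nothing stops one task from scribbling over
// state that "belongs" to another.
class SharedStateVersion {
    static int counter = 0;   // shared, unprotected

    static void run() throws InterruptedException {
        Runnable bump = () -> { for (int i = 0; i < 100_000; i++) counter++; };
        Thread a = new Thread(bump), b = new Thread(bump);
        a.start(); b.start(); a.join(); b.join();
        System.out.println("shared-state total (usually < 200000): " + counter);
    }
}

// The disciplined version: the counter has one owner; everyone else sends it messages.
class MessagePassingVersion {
    static void run() throws InterruptedException {
        BlockingQueue<Integer> inbox = new LinkedBlockingQueue<>();
        Thread owner = new Thread(() -> {
            int counter = 0;
            try {
                for (int received = 0; received < 200_000; received++) counter += inbox.take();
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            System.out.println("message-passing total (always 200000): " + counter);
        });
        owner.start();
        Runnable bump = () -> { for (int i = 0; i < 100_000; i++) inbox.add(1); };
        Thread a = new Thread(bump), b = new Thread(bump);
        a.start(); b.start(); a.join(); b.join(); owner.join();
    }
}

public class ConcurrencyStyles {
    public static void main(String[] args) throws InterruptedException {
        SharedStateVersion.run();
        MessagePassingVersion.run();
    }
}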

I may have misunderstood you then...

Single address spaces coupled with appropriate protection, I've no argument against. I just have lots of experience on single-address-space systems WITHOUT protection of any sort...

Vapour

Long dead, but still in the way-back machine: http://web.archive.org/web/20010202061100/http://ftp.rook.com.au/
Overview: if I recall, it was a plan for an OS with a single global namespace, in which everything ran in kernel mode (no context-switch penalties!), and I think it would just run atop a nanokernel. It used a Lisp-alike language with a compiler that would guarantee which areas a program could or could not touch. So basically it moved security from the kernel to the compiler. I was very interested in it back in the day (circa 1999-2000, I think), but nothing became of it.
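A hedged sketch of what "moving security from the kernel to the compiler" can look like (Java standing in for Vapour's Lisp-alike; this is the object-capability style the E language is built around, not anything Vapour actually shipped): if the only way to reach a resource is through an unforgeable reference, then handing out, withholding, or attenuating that reference is the access control, and the type system enforces it with no kernel check at all.

// Capability-style sketch: access is controlled by who holds a reference,
// and the type system (not an MMU or a kernel check) enforces it.
final class LogFile {
    private final StringBuilder contents = new StringBuilder();
    private LogFile() {}

    // The only way to get a LogFile capability is to be handed one.
    static LogFile create() { return new LogFile(); }

    void append(String line) { contents.append(line).append('\n'); }
    String read() { return contents.toString(); }
}

// A component given only an "append" view cannot read or delete the log:
// the capability is attenuated by wrapping it.
interface Appender { void append(String line); }

public class CapabilityDemo {
    public static void main(String[] args) {
        LogFile log = LogFile.create();          // full capability, held by the "owner"
        Appender appendOnly = log::append;       // attenuated capability for an untrusted part

        untrustedComponent(appendOnly);          // it can write...
        System.out.print(log.read());            // ...but only the owner can read back
    }

    static void untrustedComponent(Appender out) {
        out.append("hello from a component that never saw the LogFile itself");
        // out has no read(), no delete(), and no way to forge a LogFile reference.
    }
}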

IBM's i5/OS (OS/400)

It implements a lot of neat ideas.
Inside the AS/400 by Frank Soltis is worth reading.
There is also a Dr. Dobb's article describing it, and a Wikipedia article discussing the object system.