Virtual Machines, Language Runtimes, and the Future of Objective C

In a series of three articles last year, John Siracusa at Ars Technica argued that Apple has an impending crisis on its hands because it doesn't have "A memory-managed language and API" like Java or Microsoft's CLR. My question is less about whether this is true, and more about what it means to be "managed".

Specifically, there seems to be an implicit assumption that Java- or .NET-style architectures represent what "safe", garbage-collected systems should look like.

In the case of Apple, couldn't some new, hypothetical system language(s) be based on the existing Objective-C runtime? Or more generally, do the runtimes for systems written in lower-level languages like GObject/GTK or Objective-C (if garbage collected) provide enough services and metadata to build more dynamic languages on top of them, while still providing object compatibility, so that no FFI is needed to communicate with the older existing framework?

If that were in fact the case, wouldn't it be possible to gradually turn systems like Objective-C/Cocoa or C/GTK into something closer to Smalltalk or (god forbid) Java?


A real problem?

This isn't a real problem. Both Java and .NET are available on the Mac. And if they weren't, it wouldn't really matter, because major multiplatform applications simply aren't written in Java or .NET (by applications I mean applications, not servlets, scripts, or in-house database front-ends, which are plentiful in Java and C#).

Why aren't major applications written in Java or .NET? Because Java, the Sun Vendor Agenda Language, is seen as the bastard stepchild language on Windows. And .NET, the Microsoft Vendor Agenda Language, is avoided by people writing multiplatform applications. So, ultimately, everyone avoids using Vendor Agenda Languages for major applications.

So it really doesn't matter.

Also bear in mind that the implicit memory management of Java and C# provides a productivity gain for developers of something like 20-30%. This isn't anything fundamental, and if porting a .NET language to Linux or the Mac turns out to be a major effort, then that 20-30% can easily be lost.

Not Only That...

...but the 800 lb gorilla here that keeps getting glossed over is OS API bindings. There are plenty of worthwhile "memory-managed" languages, whether you believe Java and C# are among them or not. But without comprehensive, high-quality bindings to the various platforms' APIs, particularly for building GUIs, they won't see mainstream commercial use. Contrast, for example, the quality of anything done for Mac OS X with PyObjC vs. Java with even SWT, never mind Swing.

So in some sense, what's being asked for is a return to the days of the Lisp Machine, when the entire OS, from device drivers to UI, was in something other than C or assembly language. That's a thought that I can get behind, but who knows if/when we'll ever see the like commercially again? In the meantime, we rely on memory-managed languages to offer high-quality FFIs, and on someone to use those FFIs to provide high-quality wrappers for our platforms of choice.

major multiplatform applications?

Please provide some examples of what you mean by "major multiplatform applications".


I'm not sure whether it's a relevant point, but several of the main developers of LLVM, including Chris Lattner, now work at Apple. It appears that LLVM has decent support for GC.


From the Apple GCC manpage:

           Enable garbage collection (GC) for Objective-C objects.  The
           resulting binary can only be used on Mac OS X 10.5 (Leopard) and
           later systems, due to additional functionality needed in the (NeXT)
           Objective-C runtime.

           When the -fobjc-gc switch is specified, the compiler will replace
           assignments to instance variables (ivars) and to certain kinds of
           pointers to Objective-C object instances with calls to interceptor
           functions provided by the runtime garbage collector.  Two type
           qualifiers, "__strong" and "__weak", also become available.  The
           "__strong" qualifier may be used to indicate that assignments to
           variables of this type should generate a GC interceptor call, e.g.:

                      __strong void *p;  // assignments to 'p' will have interceptor calls
                      int *q;            // assignments to 'q' ordinarily will not
                      (__strong int *)q = 0;   // this assignment will call an interceptor

           Conversely, the "__weak" type qualifier may be used to suppress
           interceptor call generation:

                      __weak id q;      // assignments to 'q' will not have interceptor calls
                      id p;             // assignments to 'p' will have interceptor calls
                      (__weak id)p = 0;   // suppress interceptor call for this assignment

This could be viewed as a stop-gap.

I was aware that ObjC was getting garbage collection. This could be viewed as a stop-gap measure, or "good-enough" for Apple.

However, I was hoping to get more feedback (since I am honestly curious about this) about the object models provided by the likes of GObject, the Objective-C runtime, and even Qt, and their suitability for supporting higher-level languages. Can that be bolted on, or does it have to be designed in from the start?

i'd like to revive this question...

Could a language be either created or bolted on to Objective-C (outside of Apple obviously) which integrates with OSX/Cocoa libraries seamlessly?

Are there any reasons why a language with static typing, closures, pattern matching, DSL-adaptable syntax, futures, promises, FRP, and so on couldn't be created to work so well with the existing OS X libraries and runtimes that someone would pick it as the obvious choice (say, for Adobe's software or the next version of MS Office)?

By the way, I'm referring only to technical and theoretical obstacles, not political ones.

OpenMCL As Well

OpenMCL also has pretty seamless bindings to the Cocoa stuff:

? (require 'cocoa)
... stuff ...
? (defvar *W* (make-instance 'ns:window))

*poof* -- a Cocoa window is displayed.


It turns out that I needed to google for objective-c runtime. This document describes the run-time.

Back To The Future

So in some sense, what's being asked for is a return to the days of the Lisp Machine, when the entire OS, from device drivers to UI, was in something other than C or assembly language. That's a thought that I can get behind, but who knows if/when we'll ever see the like commercially again?

While I can't speak for the commercial world, this is the vision being pursued by The Institute for End User Computing, Inc.

We believe that to truly fulfill the promise of the Personal Computing Revolution, and to get the many enabling technologies that have been stagnating in our labs over the last three decades into the hands of ordinary End Users, we need to finally bite the bullet and stop trying to develop multi-platform solutions that run under today's infrastructure.

Now is the time to create a new legacy-free platform (at least at the OS/application level — we will still support your old data going forward) based on everything we know today, one that puts the best programming language technology at its core. We need to be able to code everything in a unified multiparadigm metalanguage with a transparent, human-friendly notation, so that anyone writing code for the new platform will be able to read and work with everyone else's code.

If we can make it easier for you to express your best ideas in such a new language than it would be to port your existing code in a legacy language, and if we can offer a critical mass of enabling technologies to make the new platform useful as a research tool, then with luck industry will adopt it!

We plan to start ramping up our research efforts in the New Year, so we would be thrilled to hear from any potential volunteers interested in the language design project, as well as any prospective members for our Board of Directors. Just drop me a note at:

executive hyphen director at ieuc dot org

I don't want to be tied to a single language

Languages evolve.


Yet consider how many systems have not changed languages: the old saw of there being many systems still running in COBOL. The gamut seems to be:

1. allowed to choose any language, but really any given system is stuck in some paradigm, although the computer as a whole can be a heterogeneous world - and therefore doesn't get any of the more-than-the-sum-of-its-parts fun.

2. not allowed to choose any language, but also not in a world that is flexible enough so that you are getting a benefit by using that single language.

3. same as #2 but at least it is something like Smalltalk where you are supposed to re-invent the language around the core every year. Really.

4. same as #3, but with a really integrated world a la LISP Machines or whatever your particular grail is.

5. complete and utter freedom, yet somehow managing to let things leverage off one another, so you get the multiplying effect on productivity. How can that be done?

Hence The Notion of a Metalanguage

Are you sure you don't mean that you don't want to be tied to a single paradigm?

We would rather have one language with multiple-paradigm support, as in Leda or Mozart/OZ (or any LISP/Scheme given the right libraries), than a choice of N different languages all imposing an imperative or object-oriented design to maintain compatibility with some shared runtime.

We are all for being able to skin your source code to look like your personal notation of choice, but in a truly heterogeneous world with no support for machine translation of surface structure, that means anyone trying to fix or extend your code needs to learn that notation - and the problem gets worse the more similar the notations look when they don't share exactly the same semantics.

With a true metalanguage, (i.e. one supporting multiple paradigms with some manner of macros and syntax rules) you can literally have your cake and eat it too in a single language.

Given that our understanding of programming paradigms and models of computation has progressed considerably since the days of the original LISP Machines, the time is ripe to revisit their fully integrated approach by extending the language model to include explicit paradigm-specific dialects and syntactically compatible domain-specific notations. That way we can deal with the impedance matching once, and make sure that all the code can play together and be reduced to a small set of kernel languages that type theorists and formal methods researchers can work with.

So if you want, you can take the one language, define your own "Semantic Block" (i.e., DSL), and code with any sane set of options for the semantics/paradigm that makes the most sense for solving the problem at hand, without losing the ability to tap into other chunks of functionality that were implemented with other paradigms.

I hope this helps to clear up our approach.

one ring to rule them all

Having worked on the Slymbolics :) and being a programming language guy by nature, I'm totally hooked on the approach of one language all the way down, and a kick-ass macro system for DSLs. The integration of the lisp listener with zmacs, the debugger, and hypertext doc was astounding. Every new release was easy to absorb because one knew what to expect. We'd have machines running for months without booting, dynamically linking in new code as needed.

You can see the power of this approach in Emacs, which is infinitely extensible because everything is elisp, save a small kernel that implements the interpreter.

I'm not sure it will ever be possible given the logistics and economics of programming organizations. In some sense "Worse is better."

Best of luck with your efforts at IEUC.

I've found programming in

I've found programming in ObjC (without the GC) a rather nice middle ground between fully garbage-collected systems and explicit memory management as in C++. One of the major gripes I have with C++ is that you can't safely write a function that returns a pointer to an object as its result. You can make the pointer a smart pointer, yes, but smart pointers don't cross module boundaries via virtual interfaces (COM, plugins, and such).

The autorelease-pool approach is a very good pattern: it lets me write more natural and correct code with less memory-management hassle than C++, though slightly more than with a full GC.