Apple Introduces Swift

Apple today announced Swift, a new programming language for the next versions of Mac OS X and iOS.

The Language Guide has more details about the potpourri of language features.

i'm really pissed off

that nobody listens to me at work when I say we should use Option/Maybe, but now of course everybody will be swooning over them in Swift.

(i jest. but only a little!)

(because yes, of course, how easily it integrates into the syntax does in fact matter, so Swift is probably better about it than Objective-C or C++ would be.)


anybody know from where/whom it descends?

At least from Twitter, it

At least from Twitter, it seems to be

Here's a more definitive

Here's a more definitive source: Chris Lattner

I started work on the Swift Programming Language (wikipedia) in July of 2010. I implemented much of the basic language structure, with only a few people knowing of its existence. A few other (amazing) people started contributing in earnest late in 2011, and it became a major focus for the Apple Developer Tools group in July 2013.

The Swift language is the product of tireless effort from a team of language experts, documentation gurus, compiler optimization ninjas, and an incredibly important internal dogfooding group who provided feedback to help refine and battle-test ideas. Of course, it also greatly benefited from the experiences hard-won by many other languages in the field, drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list.

real world vs. ivory tower / toy

On the one hand I do believe that all those things are required to make a *real* language. Like the compiler ninjas. On the other hand I always day-dream that maybe somebody smart (e.g. VPRI or something) can come up with the Magic Gordian Knot Solution Abstraction that is small and tight and performant and wonderful, and we'll get the albatross shaped boat anchors off of our freaking necks.

Every time I look at Dan

Every time I look at Dan Amelang's Nile/Gezira I am stunned at how succinct it is.

albatross shaped boat anchors

This interpretation of the Swift icon is one I cannot unsee. Thanks.

the Magic Gordian Knot Solution Abstraction that is small and tight and performant and wonderful

Something like Arthur Whitney's K language? :)

K is small, tight, performant (even interpreted), and does a good job with collections processing.

I'm a little envious. Awelon Bytecode (ABC) is not nearly so parsimonious or performant. If K also simplified concurrency, security, equational reasoning, streaming, and composition, I'd strongly consider using K instead of ABC. Fortunately, I do have a few ideas on how to achieve compactness and celerity (involving conventional stream compression (deflate or lzham), a global dictionary for common subprograms (ABCD), secure hashing for linking and separate compilation (as capabilities), and dynamic compilation or JIT ...).

True Machine Intelligence

Maybe if we get true machine intelligence which can write all the programs for us.

But somehow, I expect we'll end up with a bunch of sloppy machines who will have exactly the same discussion as we will.

The computer is your friend

As I've likely mentioned somewhere or other on LtU, I was told in 1986 that PL design was a dead-end field because in a few years all software would be written by optical computers, and I replied that in that case they'd be wanting a really good PL to write it in.

I do think many characteristics of intelligence commonly thought of as human peculiarities are really more intrinsic to true intelligence. I suspect even the size of short-term memory (seven plus-or-minus two chunks) may be some sort of optimum.

PL design is a dead-end field

in 1986 that PL design was a dead-end field

I've observed that's often the argument used by people advocating their own PL or research.

It's a complex, error-prone, time-intensive field where an extraordinarily large number of languages simply fail. The fact that nobody is really good at writing a PL (well, except for the die-hards of Go) indicates how desperate the situation is.

Which is why we need more of it.

Bret Victor showed us that

Bret Victor showed us that there are still lots of things to be done, especially if we focus on experience and not just the language aspect of PL.

Hey, this programming

Hey, this programming language thing is becoming mainstream.

Why did they have to mess with the selector syntax?

I really loved the Smalltalk selector syntax and they seem to have gone the MacRuby route. If they really needed to use a dot why not do something like:

object.[sel1: v1 sel2: v2]

Hey, sum types...

I had basically given up hope on seeing sum types in an industrial language, so it's nice to see that ML and Haskell have had some impact. :) Slightly more seriously:

  • The whole thing looks like a conventional statically-typed single-inheritance OO language, plus tuples, first-class functions, and sum types --- basically Apple's version of C# or Java.
  • One nice (absence of a) feature is the apparent absence of any exception mechanism. I presume they did this to make integration with ObjC easier, but exceptions are basically the wrong thing, so I appreciate their absence. I do wonder what happens when you divide by 0, though -- the manual doesn't say.
  • Overall, it's not too bad, except for the bits connected to the sadder parts of the Apple standard runtime (I mean, reference counting? In 2014?)


what should people do instead? (not what does swift do, but what do you see as the right answer in general?)

Sum types.

I think sum types are the right answer in general. Most uses of exceptions are to return an error value and I think the right way to return a value is using the return value :-)

There are clearly situations where exceptions are more convenient. But I believe that you can stick with sum types and get almost-as-convenient solutions in many of those situations.

If you're writing a throwaway script, it's nice to just assume everything is going to work and let an error just terminate your script. Using sum types and having to case on every return value is tedious. A compromise might be to use partial matching. For example

let (Right db) = openDatabase()
let (Right emp) = queryEmployees(db)

If something doesn't match, your program would die. Yeah, it's a little less convenient than exceptions, but on the flip side, if you're writing code that does care about the error, sum types are more convenient.
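Rust's Result is the same kind of sum type, so the script-style "assume Right, die otherwise" pattern above can be sketched like this (openDatabase/queryEmployees are hypothetical and stubbed out here):

```rust
// Hypothetical stand-ins for the openDatabase()/queryEmployees() above.
fn open_database() -> Result<String, String> {
    Ok("db-handle".to_string())
}

fn query_employees(db: &str) -> Result<Vec<String>, String> {
    Ok(vec![format!("{}: alice", db), format!("{}: bob", db)])
}

fn main() {
    // Script style: assume success. An Err here kills the program with a
    // message, much like an uncaught exception terminating the script.
    let db = open_database().expect("openDatabase failed");
    let emps = query_employees(&db).expect("queryEmployees failed");
    println!("{} employees", emps.len());
}
```

Code that does care about the error would match on the Err case instead of calling expect.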

Another place exceptions are convenient is when you make a bunch of function calls that all might have the same error result. For example, when doing a bunch of read() calls that might all result in an I/O error, it's convenient to have them all throw IOException and handle them with a single catch.

But I think this can be partially mitigated if you can "return" from a function using a named continuation.

fn parseFile(in: InputStream) -> result: Either {
    fn readLine() -> String {
        match in.readLine() {
            Left(err) => jump result(Left("I/O error: " + err))
            Right(line) => line
        }
    }

    s = readLine()
    s = readLine()
    return Config(...)
}
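As an aside, Rust's `?` operator gives roughly the behaviour this named-continuation sketch is after: an Err return short-circuits straight out of the enclosing function. A minimal sketch, with a made-up read_line stand-in:

```rust
// A stand-in input source; each call yields the next line or an I/O error.
fn read_line(lines: &mut Vec<&str>) -> Result<String, String> {
    lines.pop().map(|l| l.to_string()).ok_or_else(|| "unexpected EOF".to_string())
}

// `?` unwraps an Ok, or immediately returns the Err to the caller --
// playing the role of `jump result(Left(...))` in the sketch above.
fn parse_file(lines: &mut Vec<&str>) -> Result<(String, String), String> {
    let first = read_line(lines)?;
    let second = read_line(lines)?;
    Ok((first, second))
}

fn main() {
    let mut input = vec!["b", "a"]; // popped in reverse order
    match parse_file(&mut input) {
        Ok((a, b)) => println!("parsed: {} {}", a, b),
        Err(e) => println!("error: {}", e),
    }
}
```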

Finally, things like division by zero or other "java.lang.RuntimeException" things I think are better handled by just terminating the task/process (which in some cases is the entire program, or in a language like Erlang might just be a small component).

BTW, I'd be interested in any other examples of how exceptions are more convenient than sum types. I'm trying to convince myself that you don't need exceptions but I need more data.

Not sum types.

I don't think using a sum type for division is appropriate. I'd rather just see an assertion inside the division function that the divisor is nonzero. If you want to be fancy, you can support static discharge of these kinds of assertions with dependent / refinement types, but I don't think that would be in scope for Swift.

Deciding how to handle assertion failure (or a missing case encountered during pattern matching) without exceptions is another question. You could fail the entire program or have some mechanism for compartmentalized failure.
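Both options can be sketched in Rust (the function names are invented): an asserting division whose failure is treated as a program bug, versus a total division returning a sum type:

```rust
// Style 1: division with a precondition. A zero divisor is a bug and
// aborts the task (here, a panic), as suggested above.
fn div_assert(a: i64, b: i64) -> i64 {
    assert!(b != 0, "division by zero is a programming error");
    a / b
}

// Style 2: division as a total function returning a sum type.
fn div_checked(a: i64, b: i64) -> Option<i64> {
    if b == 0 { None } else { Some(a / b) }
}

fn main() {
    println!("{}", div_assert(10, 2));    // 5
    println!("{:?}", div_checked(10, 0)); // None
}
```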

Same page already?

Yeah, I was saying that division-by-zero (and other "program bug" errors) could be handled by just killing the task/process.

Partially completed operations

How will you rollback partially completed operations if you just kill the process? I really can't see it working. Consider something simple (and often used as an example) like airline seat booking. You reserve the seat before taking payment (as you don't want someone to pay for an unavailable seat), then you take payment, but something unexpected happens while taking the payment: you get a division by zero from some library you did not write and don't understand why it would even be doing division, and the program terminates... Now you have one unbookable, unpaid-for seat.


You have the same problem if the customer just walks away after reserving but before completing the transaction, don't you? This is why the seat is usually only reserved for a short time period (20 minutes say).

Timeout is an exception

Yes, and surely timeout is most neatly handled with an async timer exception. So at the beginning of the transaction you indicate the timeout, and you don't need to handle it in any of the code inside the transaction.

Exceptions not required

Timeouts can be handled by an event handler or can just be handled the next time somebody tries to book a seat (just treat any timed-out seat reservations as unreserved). I'm not sure that an exception buys anything at all here. Where would it be thrown from? In your example, the process has already aborted due to divide by zero.

Functions not required

You do not require functions to write a program, but they help organise it for readability and maintainability. The argument for exceptions is the same: they help you organise the code so it is readable and maintainable.

Asynchronous exceptions, like out-of-memory or a timeout do not need to be thrown from anywhere. For example:

try for 20 seconds {
    wait for message
    process message
} catch (any exception including timeout or divide-by-zero) {
    ... undo ...
} or commit.

Nice linear, readable code. Event-driven async code, as in JavaScript and Node.js, is very efficient, but a pain to read and maintain with all the callbacks.

I don't understand your comment about divide by zero: just because it can be thrown does not mean it will always be thrown. The timeout is independent of this (and in the example above the timeout would automatically be cancelled on success, or if any other exception is thrown).

For something like airline booking it is vital that resources are reclaimed as soon as possible. Waiting 20 minutes before you can try and re-book the seat would be unacceptable. And you cannot reclaim on the next booking because you have no idea the previous booking failed... in fact if the program terminates nobody may be able to book at all.

Time is not exceptional

If an exception is not thrown, is it still an exception? Your proposal looks more like a simple event to me - e.g., no stack unwinding.

I'm not sure you do want to release the reservation earlier in the case of a crash (divide by zero). As a customer, I would like to be able to re-try the transaction again (if I retain any trust in the system) in the knowledge that my temporary reservation is still valid. It would really anger me if I reserved a seat, the system crashed, and then I discovered that my seat had been sold to somebody else in the meantime!

It would unwind the stack

If an out-of-memory exception is not thrown is it still an exception? Of course it is. It seems like you are not considering asynchronous-exceptions at all, and are only thinking about synchronous exceptions. Async-exceptions are well known, and are definitely exceptions (perhaps more so than 'thrown' exceptions as there is no other way to handle them).

In order to get to the "Catch" cleanly it would have to interrupt the current computation and unwind the stack back to the place it was in at the 'try' - otherwise it simply would not work.

The difference is an event returns to the point it was in the computation after the handler completes (like an interrupt), whereas the exception does not, in this case it is clearly an exception as we do not want to resume the transaction that has timed out.

The point is not whether the seat reservation is held or not, but that the system is reset to a sane state. For example, as part of the payment process maybe a record is deleted, or altered as part of a set of changes, but the error occurs halfway through, so the state is neither as it would be before the reservation nor as it would be afterwards, but inconsistent. Without a transactional approach with exceptions, you may be left with a permanently unbookable seat, and you may not be able to fix this in future booking attempts, because you have lost the context and state information of the operation that was interrupted halfway through (is this an incomplete booking or an incomplete cancellation?)

In any case the system would only 'crash' if the exceptions were not being caught. In a multi-user system it is possible for one user's reservation to fail due to division-by-zero without affecting any of the other concurrent reservations happening on the system, although looking at the comments suggesting programs just terminate on errors, you would think not.


There are lots of ways to implement transactions without exceptions, just like there are lots of ways to implement timeouts without exceptions. E.g., look at how Erlang handles this - spawn a new process for the request, if it errors then kill the process and signal a watchdog process to clean up (and handle timeouts the same way).

The question of context is an interesting one, but any contextual information that can be captured by a try/catch block can just as easily be captured when spawning a sub-process. In this case, the relevant context is likely to be an undo-log of actions to reverse on error.

I'm not particularly against exceptions, but I don't think they offer any particularly unique advantages, and they also seem to lead to mixing error handling into control flow unnecessarily.

Exceptions by another name

Launching a process which joins back to another process (that watches for incomplete execution of the first process) is semantically an exception. The new process has a new stack, so when it terminates, that stack is unwound and all resources held by the process are freed. The watchdog is the catch that can do any clean-up.

Not quite

The exception model can be implemented in this fashion, but other models are also possible. E.g., the watchdog process does not have to be defined at the point where you spawn the sub-process. While an exception approach couples the exception handling code to the call-stack, in a sub-process and event system they are completely decoupled and can be defined independently. You could e.g., implement a publish-subscribe system for handling errors - sub-processes publish events when they die, and watchdog processes subscribe to the ones they are interested in.


Sounds reasonable, but it is implicitly a multi-task thing. If I rename publish to throw and subscribe to catch, where is the difference for sequential code?

So perhaps I need to refine my statement, and say they are semantically equivalent in a sequential environment.

I still think it would be neater to use the exception model, as there is less risk of missing a failure after you start the task but before you subscribe to the errors. You would have a form of race condition.

There are more differences -

There are more differences - e.g., even in the sequential case I can have more than one subscriber for the same event (one for logging, one for cleaning up, and another for emailing the customer to apologise). I can't remember when I last wrote a purely sequential program though - probably in BASIC.

Race conditions are possible, but easy to prevent as for any other code (e.g., ensure the watchdogs are started first, or buffer events until they are ready, etc).

Good for Parallel

This seems good for parallel use. I probably need to think about this some more. I don't see every function being run in a separate task, but I do like the Erlang model. I think a timeout as an exception is nicer than a process killer, and there is no other nice way to make sure the process ends.

For example you cannot guarantee a request to terminate will be obeyed by a deadlocked process, and killing one process from another is bad practice. The timeout exception seems the best way to safely guarantee termination (or handling the error).

We need exceptions to handle async errors, and there is no way around that.

Paraphrase: "I don't like

Paraphrase: "I don't like exceptions, so let's introduce concurrency!"

Now you have two problems.

Badly paraphrased

Firstly, I never said I was against exceptions, only argued that they are not the *only* workable design. Secondly, every non-trivial program has to deal with concurrency anyway, so I doubt such a scheme would be 'introducing' it. In a language built around concurrency (like Erlang), it is a natural way to deal with errors.

Sum Types + Monads = Exceptions

When you have to write reliable software, which can fail due to runtime conditions (lack of memory for example), you need a way to ensure operations either complete or do not change persistent state, equivalent to database transactions. Just terminating a program without undoing partially completed operations is a bad idea. Relying on programmer anticipated errors is a bad idea.

If we allow a universal error 'Either' type for all functions that can fail (because we don't want to have to specify a list of possible failures in a function's type signature), and then we use a Monad to thread the computations together so the case statements don't make the code unreadable, then we have just reinvented exceptions.

So if the concern is about implementation, it can be pure-functional and use sum types and monads to implement, but the concept of an exception seems a good thing for code maintainability and readability to me.
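A small Rust sketch of this point (the pipeline steps are invented for illustration): a universal error type plus `and_then` as the monadic bind gives linear happy-path code with exception-like short-circuiting:

```rust
#[derive(Debug, PartialEq)]
enum AppError {
    Parse(String),
}

fn parse(input: &str) -> Result<i64, AppError> {
    input.trim().parse().map_err(|_| AppError::Parse(input.to_string()))
}

fn double(n: i64) -> Result<i64, AppError> {
    Ok(n * 2)
}

// and_then threads the computation: the first Err short-circuits past
// the remaining steps, exactly like an exception unwinding to the
// nearest handler -- no case statements in sight.
fn pipeline(input: &str) -> Result<i64, AppError> {
    parse(input).and_then(double).and_then(double)
}

fn main() {
    println!("{:?}", pipeline("10"));  // Ok(40)
    println!("{:?}", pipeline("ten")); // Err(Parse("ten"))
}
```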

I'm not yet claiming that

I'm not yet claiming that exceptions can be completely replaced with sum types or something else. It's just that exceptions have certain undesirable properties, so I'm wondering "ok, let's say we get rid of exceptions, can we still get by?"

(BTW, I'm also interested in practical/ergonomic issues for a general-purpose programming language, not just theoretical equivalence or whatever.)

One of the "undesirable properties" that make me want to find an alternative to exceptions is that the availability of exceptions forces you to decide which cases to cover with exceptions and which to cover with the return value. If your language has sum types, the decision is even tougher because of the overlap.

(Where do you stand on when to use a sum type vs. an exception? For example, for functions that parse text, the "input parse error" case seems like a good candidate for sum types. Things like out-of-memory are definitely trickier. Also, what do you think of Java-style checked exceptions?)

When you have to write reliable software, which can fail due to runtime conditions (lack of memory for example), you need a way to ensure operations either complete or do not change persistent state, equivalent to database transactions. Just terminating a program without undoing partially completed operations is a bad idea. Relying on programmer anticipated errors is a bad idea.

Hmm, do you have a more concrete example? As I said before, I'm definitely looking for situations where sum types seem to be inadequate (with the ultimate goal of figuring out how to make them adequate :-)

All Programs

Really, all useful software needs this property. I used airline seat booking above, but even a paint program needs it, otherwise if any operation fails (out of memory when applying a filter to an image) you lose all work since the last save.

As for when to use return values and when to use exceptions, let's put aside implementation issues (for example, many exception systems are a lot slower than returning values, so that influences when you would use them).

Perhaps what is needed is a uniform way of handling this? How about two return statements instead of one:

return x
throw y

This encodes the result in a sum type. Then, how about some kind of case statement to detect the response:

try {
} catch(a) {
} catch(b) {
}

Finally, how about if you do not have a case for a particular failure, it immediately returns from the current function, propagating the same failure value, so common failures can be passed up without boilerplate code. We have just re-invented exceptions (except perhaps with a cleaner implementation).

However you still have the problem of deciding what is a failure and what is just a different success value. Programming is full of these kind of decisions.

Edit: If designing a language from scratch, I would probably choose not to have try/catch as separate syntax from the usual case construct. I am undecided about whether multiple return statements are a good idea, but it seems such a common thing that declaring a datatype for each function seems prohibitive. This leaves one big problem, and that's propagating return values that do not match a case expression... that requires the return type of the outer function to depend on the return type of the inner function. If we need to do IO as well, this would need to be monadic, and we have just re-invented the exception monad again.
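The proposal can be approximated in Rust (names invented for illustration): `return x` becomes Ok(x), `throw y` becomes Err(y), the catches are match arms, and an unmatched failure is explicitly re-propagated to the caller:

```rust
#[derive(Debug, PartialEq)]
enum Failure {
    NotFound,
    PermissionDenied,
}

// "return x" is Ok(x); "throw y" is Err(y).
fn load(name: &str) -> Result<String, Failure> {
    match name {
        "config" => Ok("settings".to_string()),
        "secret" => Err(Failure::PermissionDenied),
        _ => Err(Failure::NotFound),
    }
}

// The try/catch becomes an ordinary match. The failure we have no
// "catch" arm for is re-raised to the caller via Err(e), which is the
// propagation step the proposal wants to make implicit.
fn load_or_default(name: &str) -> Result<String, Failure> {
    match load(name) {
        Ok(s) => Ok(s),
        Err(Failure::NotFound) => Ok("default".to_string()), // catch(a)
        Err(e) => Err(e), // uncaught: propagate upwards
    }
}

fn main() {
    println!("{:?}", load_or_default("config"));  // Ok("settings")
    println!("{:?}", load_or_default("missing")); // Ok("default")
    println!("{:?}", load_or_default("secret"));  // Err(PermissionDenied)
}
```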

When to use exceptions

My current answer is that exceptions should be used only to indicate bugs, kinda like assertion failures. The program should contain many throw/catch/finally statements to make sure things are going well, but if anything is actually thrown in production, the programmer should analyze the stack trace and fix the program so it doesn't happen next time. If a thrown exception doesn't mean that the program should be fixed, then it probably shouldn't be an exception.

Event Handling?

In the C++ parser combinator library I wrote, because the user of the library determines the return types of parsers, I have used exceptions for parse errors, so that the errors are reported correctly independently of the user-defined return values. To me this makes sense in that as the thread of control descends recursively deeper into nested parsers, the code deals with the normal flow... this leads to linear and readable code. Then we can have a single top level catch to deal with displaying diagnostic parse errors.

This seems a good use of exceptions to me, it improves code readability, and hence maintainability, and separates concerns, so that the parse diagnostic is kept separate from the continuation of parsing, and the extensive error reporting code is kept in one place.

Parse errors

I have taken exactly this tack in my (imperative) parser library, and for the same reasons too. Now I am faced with porting the library to Swift, and want to do things in a Swiftian way, rather than be too literal with my porting. I asked for comment in another thread (Parser error handling without exceptions), to which you responded helpfully.

Somehow your — perfectly plausible — rationale is lost on commenters who opine, e.g. "I can't understand why anyone would ever use exceptions", or "Which part of 'exceptional' did you not understand?" Furrfu!

my take as well

In practical programming, there are lots of times where you want to tear down a large chunk of the program without exiting the entire program. Exceptions give you that, and without exceptions, I'm not sure what else you do.

I agree that exceptions are often over-used, including in some of the fiercely defended examples in these comments.


I think my point is more that returning an Either type, and threading control flow using a monad, is effectively an exception. So if two things are semantically equivalent, why not use the neatest syntax?

Using monads changes the type

As far as implementation strategy goes, it doesn't really matter whether you use a sum or a continuation to implement error handling -- it's six of one, half-a-dozen of the other.

The payoff with monadic (and similar) error-handling schemes is that they change the type of computations which can have errors. As a result, programmers must handle errors at any point they want to ensure no errors can escape from. Without that type discipline, they may handle exceptions at points they want to ensure no errors escape from, but the compiler won't complain if they forget.

In principle, it should be possible to do checked exceptions in a programmer-friendly way, but in practice there aren't actually very many languages that do -- Daan Leijen's Koka is pretty much it, as far as I know.

My issue with this is the

My issue with this is the apparent assumption that it is desirable that errors are handled close to the point of their occurrence - that there should, or could, exist some kind of subunit of code from which errors can't escape. I don't think such subunits exist, and that exception handling is a global application responsibility.

It's my experience that errors should very rarely be handled near the point of occurrence, and that the vast majority of code should pass all errors through to the caller, after localized cleanup. In try/catch/finally terms, finally should outnumber catch 100s to 1 - ideally there should only be a single catch or equivalent.

In server applications, that's normally in the request handler with logging; in GUI applications, it's in the event loop; and in command-line apps, it's in a wrapper in main(), normally a reason to terminate with message on stderr - typically file not found, couldn't connect, etc.

Error handling that's more specific than this is usually not actually error handling.

This means that checked exceptions are a fundamentally misguided and unwanted feature for most practical programming.

(Add in lazy evaluation, however, and my opinion changes.)

Representations for error handling

I've posted about representations recently and I don't remember if it was in the context of error handling, but IMO this is the way to go: at the high level we have values that don't generally involve concerns of memory management or exception handling, and then we have a facility for building representations of these high level values in terms of lower level values that do worry with such things.

Then, at choice places in the program, we switch to the lower abstraction level. That is, instead of telling the compiler "call foo", we instead say "find a representation foo and call it" where the selected representation will either compute a representation of the value foo evaluates to or will fail in some way (throws an out-of-memory exception, maybe). The place where we do this serves as a boundary for compartmentalized failure without polluting all of the high level code with error handling logic.

This is superior to pervasive use of exceptions because while exceptions can syntactically hide the presence of exceptional cases, they don't semantically hide the presence of exceptional cases. This approach allows separation of concerns: get the high level values right and then handle exceptional cases correctly.

Maybe and Either

I have to say I love Haskell's handling of this via the Maybe and Either monads. Either when you care why it failed and Maybe when you don't. Mainly you don't need extra control structures; you just write the code as if everything was going to work, and bind handles the details of passing exceptions up the stack.

I'm not sure what could be better. So put me down in the Sum types work well camp.


I cannot imagine writing a type-checker (I've written a lot of those lately) without exceptions.

If you want to write a single-pass algorithm that recurses on the structure of the AST, you have to handle the case of (e.g.) an undeclared variable. So, you can either (1) return Some(result) | None from all (usually several mutually-recursive) functions, or (2) return just result and handle an error using an exception.

Case (1) will incur an (IMO unacceptable) syntactic and cognitive overhead, as you will be matching the return value of every function and possibly have to return prematurely. It also goes against the DRY principle. On the other hand, exceptions (case 2) do exactly what you want - abort the whole computation to an arbitrary handler higher up the call stack, with no coding overhead whatsoever.
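For what it's worth, in a language with a propagation operator, option (1) loses much of that overhead. A toy single-pass checker in Rust (the AST is invented for illustration), where `?` aborts the whole traversal on the first undeclared variable:

```rust
enum Expr {
    Var(String),
    Lit(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Option (1) from the comment above: every recursive call can fail
// with "undeclared variable", and `?` unwinds the whole traversal to
// the top-level caller, much like an exception would.
fn check(env: &[&str], e: &Expr) -> Result<(), String> {
    match e {
        Expr::Lit(_) => Ok(()),
        Expr::Var(name) => {
            if env.contains(&name.as_str()) {
                Ok(())
            } else {
                Err(format!("undeclared variable: {}", name))
            }
        }
        Expr::Add(l, r) => {
            check(env, l)?; // any nested failure aborts here
            check(env, r)
        }
    }
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Var("x".into())), Box::new(Expr::Lit(1)));
    println!("{:?}", check(&["x"], &e)); // Ok(())
    println!("{:?}", check(&[], &e));    // Err("undeclared variable: x")
}
```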


I said more or less the same thing about parsing further up. You want to keep the recursive code clean and linear and have extensive error reporting code all in one place. I do the same thing in type-inference having unification throw an exception. In this case I catch the unification error in the type inference algorithm and augment the information in the exception with some context from the type-inference so far before re-throwing. It seems to result in more readable and maintainable code to me.

I've never seen a type

I've never seen a type checker written with exceptions, even with languages that had easy access to them, and had lots of multipass issues. Scala for example uses 2 passes (namers and typers) as well as a bit of lazy evaluation. On symbol not defined, an error symbol is returned, no need for options at all, and for error recovery that is even undesirable!

I can see how toy type checkers would throw because they only report one type error at a time, but not a production compiler.

Interesting point

Actually the argument for a parser seems stronger as you will always want to stop parsing and report the error at the top level, remembering that parser failure is not an error unless you make it so (parsers can succeed, fail or throw an exception, so recoverable errors just backtrack and try something else).

For type-inference I could see little point in continuing unification after a failure. You could continue to infer garbage types, but they would not be informative nor help debugging, as far as I can see.

Is type-checking that different? I suppose it is, as I do the post-inference checking in an embedded logic language, so it backtracks on errors rather than having exceptions, and I have deliberately kept the dialect cut-free.

If by garbage types you mean

If by garbage types you mean "error", it's just like reaching bottom and continuing to propagate that. Sometimes you can still infer useful types given an error typed argument or type variable, many type errors are quite local!

I've never seen a parser use exceptions for error recovery either. Actually, exceptions are really meant for exceptional behavior, and syntax/semantic errors are not exceptional at all.


You can have a look at the parser combinators here:

and the type inference here:

So now you can't say you have never seen it anymore :-)

Surely its a matter of definition though. If I propose a language feature that allows non-local error recovery, and that happens to be semantically the same as exceptions, it would be good design to provide a single mechanism for handling that. In effect exceptions are just an Either type and the Error monad (implementation details aside), so what's the problem with using that for non-local error reporting?

Exceptions pop control, and

Exceptions pop control, and you still have work to do after the error occurred that gets left under the table because the stack is unwound. You can't really recover from a throw (abort compilation, basically), unless you have some kind of retry semantics to go along with it.

Assumes you want to recover

That assumes recovery is sensible or useful. When a parser encounters a symbol not in the grammar, I want predictable behaviour; I don't want the parser to try to guess some possible interpretation. Debugging needs predictability, and should not be a lottery.

What sensible behaviour could you achieve in a parser or type inference? You certainly cannot compile the program and produce output. Is there any evidence that debugging syntax or type errors is easier if you recover in some way (anecdotal or otherwise)?

Error recovery usually means

Error recovery usually means outputting an error and skipping ahead in the input. Alternatively, you can abort the subtree. If your parens are balanced and/or your statements/expressions are clearly delimited, then the extent of a parse error can easily be contained.
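
The delimiter-based containment can be sketched as panic-mode recovery, where the parser reports an error and skips to the next delimiter (a hypothetical toy, not Swift's or any real parser):

```python
def parse_stmts(tokens):
    """Parse ';'-separated statements (just identifiers here); on an
    unexpected token, report it and resynchronise at the next ';'."""
    stmts, errors, i = [], [], 0
    while i < len(tokens):
        if tokens[i].isidentifier():
            stmts.append(tokens[i])
            i += 1
        else:
            errors.append(f"unexpected token {tokens[i]!r}")
            while i < len(tokens) and tokens[i] != ";":
                i += 1  # panic mode: skip to the delimiter
        if i < len(tokens) and tokens[i] == ";":
            i += 1
    return stmts, errors

# one malformed statement does not stop the rest from parsing
stmts, errs = parse_stmts(["a", ";", "1", "+", ";", "b", ";"])
```

The extent of the damage is bounded by the delimiter, which is exactly why clearly delimited statements make recovery cheap.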

Eclipse IDE/Scala presentation compiler

An interesting paper in this direction is this one; searching for prorogued programming on Google Scholar suggests other relevant results.

A few more thoughts from my experience follow.

Eclipse reports errors right away, well before the compiler completes; this relies on typechecking incomplete code, and uses a special variant of the early phases of the Scala compiler itself, called the presentation compiler.

Moreover, it will try to report all errors — the one you want to fix might not be the first to be reported.

In practice, people might even try refactorings (which depend on typing information) while the source code is not fully correct. A boring case is when you're halfway through writing a new definition which is not yet used, say, but you want to use some method which you need to extract out of an existing one; a more complicated one is while you're editing a definition which is used.

Early error reporting is often considered important to avoid interrupting flow (in the psychological sense, see Wikipedia). That's especially important with a compiler as slow as Scalac.

At least for Java, Eclipse used to go further, and I guess it still does: it enables you to run a program with errors (IIRC after some confirmation dialog), but if you run the offending code it will throw an exception (which you can break on in a debugger). I hear anecdotal evidence suggesting that this makes things easier for some users (who might otherwise use a dynamically typed language); in particular, it helps reduce the "action at a distance" issue (see papers on cognitive dimensions, which focus on issues for end-user programmers).

Grumble grumble.

No body.

Common Lisp Conditions and Restarts

I have fond memories of recovering from syntactic errors during a build with MCL. It used Common Lisp's conditions and restarts mechanisms. A compiler error resulted in a restart dialog. When the dialog popped up, the usual practice was to open the file, fix the error, and resume the build from the dialog. The dialog provided a range of options: restart the compile of this file, restart the build from this top-level defsystem, restart from the beginning, or cancel the build altogether.

Ironically, Apple bought (and nearly destroyed) MCL decades ago.


This seems nice, and can be done with exceptions by catching at the file level. It could also be done with an Either return value. The only approach that causes problems here is simply exiting on errors.

This suggests to me that calling an error-print function and exiting is the least flexible method of error handling. What we want to do is populate some datatype that describes the error and propagate it up to the appropriate level for handling, whether this is done by exception or return value. My personal take is that the code in the levels between where the error occurs and where it is handled is more readable and neater with exceptions, as the processing can be kept as a linear sequence, with early termination and upward propagation handled implicitly.
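
The contrast can be sketched like this; both versions carry a structured error value to the top level, one threaded explicitly through return values and one via an exception (all names invented for illustration):

```python
class CompileError(Exception):
    """Structured error: records which phase failed and why."""
    def __init__(self, phase, message):
        super().__init__(message)
        self.phase, self.message = phase, message

# Either-style: every intermediate level must thread (value, error) through.
def parse_explicit(src):
    if not src:
        return None, ("parse", "empty input")
    return src, None

def compile_explicit(src):
    tree, err = parse_explicit(src)
    if err is not None:
        return None, err          # boilerplate repeated at each level
    return tree.upper(), None     # stand-in for later phases

# Exception-style: the middle layers stay linear; the driver catches once.
def compile_with_exceptions(src):
    if not src:
        raise CompileError("parse", "empty input")
    return src.upper()

def driver(src):
    try:
        return compile_with_exceptions(src)
    except CompileError as e:
        return f"error in {e.phase}: {e.message}"
```

Both get the error to the right level; the exception version just makes the intermediate code shorter.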

Question about unification

I am trying to understand how unification like this works. Let's say I have "2 2". So I start with fun-type = "Int" and arg-type = "Int", and I try to unify: "unify fun-type (arg-type -> beta)". Obviously unification fails trying to unify "fun-type" with "->", so I set both to bottom (_|_); however, the type of the application remains beta, which is wrong.

To make this work do I have to substitute all variables on both sides of the unification with bottom?

Right thread?

This is the thread about Swift, which is a garden variety OO-with-a-splash-of-functional industrial language, not anything that contains the features or properties you are asking about. :)

relevant to previous answer

It's relevant to Sean's point. The discussion started with observations about exceptions, starting with Swift, which does not appear to have exceptions. The discussion moved on to what exceptions are good for (and hence what Swift might be missing out on). I posted an example of parsing and unification that used exceptions, and Sean responded with a description of how it can be done without exceptions.

So the question is about how you would properly implement unification in Swift (and other languages) without using exceptions.


didn't see the relevant context above; just checking to make sure it wasn't an errant post.

Carry on :)

That's correct

In code following the erroneous function application, the "result" of the application can be used in other ways (e.g. (2 2) + 1), so that beta might get unified with another type. If it does, the compiler has more information about your error ("2 has type int, but a type int -> int was expected").
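
A toy unifier can sketch how beta survives the failure and is later pinned down by the surrounding context; the representation here (type variables as lowercase strings, a substitution dict) is invented for illustration:

```python
def unify(a, b, subst, errors):
    """Unify two toy types; on mismatch, record an error and continue."""
    a, b = subst.get(a, a), subst.get(b, b)
    if a == b:
        return
    if isinstance(a, str) and a[0].islower():
        subst[a] = b          # bind the type variable
    elif isinstance(b, str) and b[0].islower():
        subst[b] = a
    else:
        errors.append(f"cannot unify {a} with {b}")

subst, errors = {}, []
# (2 2): Int cannot be a function type, but beta is left untouched...
unify("Int", ("->", "Int", "beta"), subst, errors)
# ...so (2 2) + 1 can still pin beta down to Int afterwards.
unify("beta", "Int", subst, errors)
```

After the failed unification, beta is still a free variable, so the later use of the result as an Int binds it and gives the compiler more to say in its error message.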

Type-checkers and exceptions

OCaml's type inference engine uses exceptions to report typing errors; see the file in the sources. Even if its error-reporting strategy is simple ("fail at first error"), in-place unification still produces partially typed trees that can be used to understand the cause of the error.

How does this fit into the

How does this fit into the OCaml IDE experience?

reference counting

what should be used instead? seriously. :-)

Apple seems to have an allergy to garbage collection

Perhaps on mobile devices, that makes sense: iGadgets tend to try to minimize RAM consumption, as maintaining DRAM requires power, and RC (when it works) can reclaim unused memory faster than many GC implementations.

Doesn't make sense on mobile

I don't think it makes sense on mobile either. RC comes with substantial overhead as well, in space and time. And that it can reclaim memory faster than "many" GCs is not a real point -- you wouldn't pick one of those, of course. In fact, it's much worse wrt fragmentation and memory locality, because you cannot compact.

It's just one of these shortcuts that many languages take just to regret it later.

Which GC would you pick?

And that it can reclaim memory faster than "many" GCs is not a real point -- you wouldn't pick one of those, of course.

I'm not aware of any GC that could compete with RC on the "speed" of picking up garbage (essentially, there is no garbage in memory with RC). I've studied quite a bit of GC literature, but less so recently, so if I'm missing something, please let me know!

RC is slower than mark/sweep

Reference counting is about 30% slower than mark/sweep; see:

Note where it says: "... reference counting is almost completely ignored in implementations of high performance systems today.".

Reference counting is about

Reference counting is about 30% slower than mark/sweep; see:

In throughput. Mark-sweep is slower in latency. Which is preferable depends on your application.

Reference counting has

Reference counting has excellent latency characteristics while its overhead is easy to optimize. It can also be applied to resources, whereas GC is too late. It's not clear that GC can provide a real energy advantage, but it sounds easy to test...


When a reference count goes to zero, you have to free the whole subgraph pointed to by that reference. If you do this eagerly, you'll end up with a pause proportional to the size of the object graph you're freeing. To guarantee good latency, you need to put the object on a queue, and then free a bounded amount of memory in each logical time step. But then you lose the guarantee about when a resource will be freed!
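
The queued variant can be sketched like this: releasing a dead subgraph is O(1), and each logical time step frees at most a fixed budget of objects, which bounds the pause but makes the moment of reclamation unpredictable (a toy model with invented names):

```python
from collections import deque

class Node:
    def __init__(self, children=()):
        self.children = list(children)
        self.freed = False

pending = deque()

def release(root):
    pending.append(root)  # O(1): just queue the dead subgraph's root

def free_step(budget):
    """Free at most `budget` objects; children become pending, not freed."""
    n = 0
    while pending and n < budget:
        node = pending.popleft()
        node.freed = True
        pending.extend(node.children)
        n += 1
    return n

# a chain of five nodes: freeing proceeds incrementally across steps
chain = Node([Node([Node([Node([Node()])])])])
release(chain)
```

The trade-off in the comment above is visible directly: any node deeper in the chain is not reclaimed until some later step, so a resource tied to it is not released at the point the last reference went away.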

So refcounts can either have good latency, or can manage resources reliably, but not both at the same time.

False dilemma, IMO. With RC,

False dilemma, IMO. With RC, you at least have some direct control over the latency. When collecting doesn't noticeably affect latency, you would just let RC do its natural thing. When collecting DOES noticeably affect latency, you opt-in with a little queuing on an as-needed basis. Hence, you can (theoretically) get the better of both worlds.

Conversely, if you have latency issues with a mark-and-sweep GC for example, you are either just stuck, or you require MUCH larger program transformations than a bit of said opt-in queuing...

In practice

Emphasis is on "theoretically" here. In practice, the GC analogue of Greenspun's rule applies.

No emphasis needed for

No emphasis needed for certain apps. In real-time software like games, latency is considerably more important than throughput.

They provide special

They provide special operators that have defined behaviour on overflow. Regular operators "report an error" on overflow. I'm not sure what they mean; they talk about handling it, so it might do something like raising a signal.

There is also support for defining custom operators with custom precedence. I'm not sure this is good...

Apart from this it looks like Swift could finally bring the gospel of variant data types, pattern matching, and 'a option to a lot of developers. Good stuff!

Division by zero

The overflow-division operator will return 0 on division by zero. I would have thought the max value for the type (with the appropriate sign) would have been more sensible. Likewise, I would like to have seen saturating operators as well.
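
What the suggested saturating division might look like, sketched for a 32-bit type (hypothetical semantics; Swift's actual overflow-division operator yields 0 here, as noted above):

```python
INT32_MAX, INT32_MIN = 2**31 - 1, -2**31

def saturating_div(a, b):
    """Divide, saturating to the type's extreme value instead of trapping."""
    if b == 0:
        if a == 0:
            return 0
        return INT32_MAX if a > 0 else INT32_MIN  # max value, matching sign
    q = int(a / b)  # truncate toward zero, as C and Swift integer division do
    return max(INT32_MIN, min(INT32_MAX, q))
```

The clamp at the end also covers the one in-range overflow case, INT32_MIN / -1.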

Bottom propagation

seems to be part of the language: invoking methods on nil returns nil rather than raising an exception, at least if you use the ?. selector for your method calls.
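
The ?. behaviour can be mimicked with a small helper; `opt_call` is an invented name, sketching the semantics rather than Swift's implementation:

```python
def opt_call(obj, method, *args):
    """Like Swift's obj?.method(args): if obj is nil/None, the whole
    call short-circuits to None instead of raising."""
    if obj is None:
        return None
    return getattr(obj, method)(*args)
```

Chaining several such calls propagates the nil all the way out, which is the "bottom propagation" being described.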

Objective-C has had this

Objective-C has had this since day one, in an interesting departure from its Smalltalk roots.

FWIW, it's been a part of

FWIW, it's been a part of Objective-C since almost day one.

what happens when you divide by 0

Having downloaded the Xcode 6 beta: integer division by zero raises a SIGILL (which the REPL will handle). The compiler can catch an obvious integer division by zero for you. Float division by zero evaluates to +Inf, NaN, or -Inf.

Overflow divide by zero

If you use the overflow division operator, division by zero is defined as x / 0 = 0.

Question on the "tuple" type

The linked-to manual is unclear on this point, but are Tuples in Swift essentially Cartesian products, or small fixed-size heterogeneous aggregates? In other words, is a single element Tuple essentially a scalar, or not? And can one have Tuples as elements of other Tuples, or does that flatten out (or is disallowed)?

The common purpose of Tuples seems to be things like passing back multiple return values (and Swift has something called "Void" which is an empty tuple), and for doing things like compound or destructuring assignments. On the other hand, there is a selector syntax for tuples (".0", ".1"); that appears to be different than the square-bracket notation used to retrieve array elements. Also, there is this note:

Tuples are useful for temporary groups of related values. They are not suited to the creation of complex data structures. If your data structure is likely to persist beyond a temporary scope, model it as a class or structure, rather than as a tuple. For more information, see Classes and Structures.

It isn't clearly indicated whether or not Tuples are first class (or even second class); the examples show local variables being given Tuple type, and Tuples returned from functions, but there's no example of a Tuple either being passed to a function or being a property (attribute) of a data structure.

Anyone know?

they are all right

It seems fairly clear that there aren't any single-element tuples; (Int) is just Int. (Thank goodness.)


As far as I can tell, the language provides no built-in support for concurrency or parallelism. Does this mean you need to rely on library mechanisms like Grand Central Dispatch?

The section on String Mutability is also confusing - does assigning a string to a variable make the string mutable or is it just the variable? The docs imply the former, which is slightly odd.


The next paragraph (Strings are value type) implies that strings are immutable; the example above is phrased in term of string mutability, but I think it actually only describes mutable variables.

I'm interested in concurrency too. If the language had a good concurrency story (and runtime), it would probably be strictly better than Go technically (but proprietary, which makes it a no-go). That doesn't seem to be the case.


I find the language in these sections very confusing. It says that strings are value types. Then it says that they are always copied (why bother if they are immutable?). Then it says "You can be confident that the string you are passed will not be modified unless you modify it yourself." which implies that it is possible to modify the string yourself.

The section on Mutability of Collections seems to further muddy these waters. The described behaviour and definition of "immutability" seem slightly non-standard to say the least! Am I missing something?

Strings have value semantics

it appears.

It also appears that COW may be implemented, so that many string operations can be done with aliasing rather than copying, and many updates can be done in place rather than by producing a new mutated copy.

Arrays and dictionaries appear to have value semantics as well, though the pages from Apple made it appear that dictionaries are copied across function calls? I find that hard to believe.


CoW and in-place update optimisations would make sense given that they are using reference counting - it is trivial to determine if the string is aliased or not. Tcl does the same. I always found that a mixed blessing in Tcl as introducing a new procedure call or another variable might suddenly drastically change the time and space complexity of apparently equivalent code if it causes the reference count to be bumped.
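
The aliasing check can be sketched with CPython's own reference counts standing in for the language runtime's (a toy; a real implementation would read the count atomically, which is exactly the problem raised below):

```python
import sys

class CowList:
    """Copy-on-write wrapper: writes copy only when the buffer is aliased."""
    def __init__(self, items):
        self._buf = list(items)

    def share(self):
        other = CowList([])
        other._buf = self._buf  # alias the buffer, no copy yet
        return other

    def set(self, i, value):
        # Two references are ours (the attribute and getrefcount's argument);
        # any more means another wrapper aliases the buffer.
        if sys.getrefcount(self._buf) > 2:
            self._buf = list(self._buf)  # copy on write
        self._buf[i] = value
```

Note how, as described above, introducing one extra alias silently turns `set` from an O(1) in-place update into an O(n) copy.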

This passage about arrays troubles me, also:

Immutability has a slightly different meaning for arrays, however. You are still not allowed to perform any action that has the potential to change the size of an immutable array, but you are allowed to set a new value for an existing index in the array.

So, immutable for arrays means "fixed size but mutable".

concurrent aliasing?

I like CoW and refcounting quite a bit, and scatter-gather composite strings using refcounted iovec fragments is good enough for me.

... it is trivial to determine if the string is aliased or not.

But under native-thread concurrency, there's no good way to ask "do I hold the only reference?" because reading the refcount is a race, and you can see a value of one just before it changes to two. So if refcounted objects are shared among more than one native thread, you can't optimize with update-in-place, based on whether a string looks aliased, because we can only see the past and not the concurrent present. (If that's not what you were thinking, I apologize.)

It's easy enough to patch a fragment written with a newly consed refcounted replacement. Then if parts you discard really only had a single ref, and were not being aliased concurrently, they'll go back to the heap for near future re-use. There's copying but little coordination, as long as each thread has an unshared wrapper for "the string" that refs the refcounted fragments.

Concurrent RC

That's true. Tcl uses a strict message-passing (copying) approach to communication between threads, so ref-counted structures are never shared between threads.

I wonder what Swift does in this situation, as they are silent on concurrency issues generally.

We do it with elves.

I suspect the average Swift user is not expected to worry about concurrency, but conjecture of that sort belongs in the discussion thread about politics. Someone needs to make concurrency work though, even if devs using Swift think about it seldom. Most of my comment is about making things work, and not much about Swift.

(Here I will say thread to mean native-OS-thread, and fiber to mean single-threaded-coroutine. I have a single-syllable word I use to mean named set of sibling fibers in one single-threaded logical process, but it won't come up here.)

The Tcl strict-copying style is sane and reasonable, and a good fallback position when you try more tricky sharing for performance reasons in some localities. I prefer strict copying via messages between threads, but immutable shared refcounting between fibers.

The last paragraph of my first post above actually describes how lightweight process fibers stream i/o to each other in immutable refcounted mbuf style. (For accounting purposes though, each process should conservatively count itself sole owner of shared space, or else it's hard to kill on out-of-memory.) Patch-based CoW prevents affecting other fibers when writing to edit streams.

The main reason I don't want to refcount between threads this way is because I want faster local single-threaded heaps, so a per-thread runtime does its own independent space accounting. This means I don't want to free space allocated by one thread in another, and would prefer to copy just to avoid that. Basically I want to treat other threads like remote servers that just happen to be conveniently local.

(The main exception is a thread pool to service blocking system calls on behalf of async fiber operations we don't want to block the original calling thread, since that would block all hosted fibers and not just one caller. A fiber parked waiting for reply effectively pins the space passed in call args, so it doesn't need more refs when serviced by a thread pool. Killing a fiber's host process can't free the space until async replies resolve. Basically there's a ref associated with an async call, and the thread pool is an implementation detail.)

I can see why they don't talk about it in Swift docs, because concurrency is complex and Swift aims to make things simpler than ObjC.


You can guarantee you hold the only reference with arrayname.unshare(), which either does nothing or makes a copy, guaranteeing you the only reference.

if single threaded, sure

I'm sure you're not intentionally ignoring the whole point of multiple threads, but it reads like that. If you're citing docs for unshare(), either they assume one thread, or they're lying — or wrong, which is slightly worse. I'm guessing that library doesn't support concurrent use.

I implemented a similar sort of unshare() method myself, in past copy-on-write support. It only works safely when there is only one thread involved, or when you devote at least one bit to say "has ever been shared with another thread" where you pessimistically assume it's always shared no matter what the count.
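
The one-bit scheme can be sketched like this (invented names; this models only the bookkeeping, not the atomic operations a real multithreaded version needs):

```python
class Buf:
    def __init__(self, data):
        self.data = list(data)
        self.refs = 1
        self.ever_shared = False  # pessimistic: set once, never cleared

    def retain(self):
        self.refs += 1
        self.ever_shared = True   # another holder may be another thread
        return self

    def release(self):
        self.refs -= 1

def can_update_in_place(buf):
    # refs == 1 alone is racy; the sticky bit makes the check conservative
    return buf.refs == 1 and not buf.ever_shared
```

The bit deliberately stays set even after the count drops back to one, trading some missed in-place updates for safety.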

Edit: it's not that easy to let another thread get the next ref without there already being at least two refs, unless you're cutting corners dangerously close, or doing something goofy you really shouldn't be doing (like giving access to the refcounted parts without locking the non-refcounted parts). It's not worth arguing about, unless you want to find the bullet proof answer when there's no more margin for error. Because I care, I make the mistake of assuming others do; sorry about that.



You are making a good point. Apple's preferred procedure (NSOperation) for handling a mutable array where some operations are being done by threads is to copy the array off to an immutable copy in the main thread and then pass the immutable copy to the threads for computation (effectively a multi-threaded unshare).

1) Initialize the operation with an immutable copy of the data it needs to do its work.

2) Execute the operation; while it's executing, it only operates on the data it was initialized with and on data that it creates for itself and does not share with other threads.

3) When the operation completes, it makes its results available to the rest of the application, at which point the operation itself no longer touches those results.

link to Apple's example on arrays. So the unshare happens before the threading happens. There aren't a bunch of coequal threads ever in Apple's model.

Swift will come up with some recipes too

Thanks for the followup. That sort of immutable safety dance matches my expectations better.

How would this happen?

I've been thinking about your scenario a bit, and I'm not sure how the data race would happen. Assume that you have the only reference to the object, so the ref count is one. Now, what needs to happen concurrently is that a new thread starts with access to the object without yet having updated the ref count, while your thread mutates the object.

What I can't see is how the new thread starting will lazily get access to the object reference. Building the closure for the new thread (whether manually or with some kind of language support) that enables the thread to access the object reference should increase the ref count before the original thread has the possibility to also mutate the object.

Maybe I'm missing something obvious here.

scenario like this one

The scenario I have in mind requires manual refcounting (usually in C), not proportional to pointer aliases, so you can see a pointer to a refcounted object without holding a ref to it already. It may also require heavy use of borrowed refcounts, meaning you know someone must be holding a ref, or you wouldn't see a pointer passed to (say) a function taking a pointer. It doesn't require a new thread start, only that existing threads pass around pointers when a dev believes, "All I need to do is take my own ref to ensure this stays alive."

Let's use a C dev named Jim to make this less abstract, so I can talk about a reasonable line of thinking he may use, unaware of risks. Jim thinks CoW prevents him from accidentally mutating a shared object, so he's sure it can't happen. (He thinks there's a proof of this, but hasn't really attacked the premises.) But Jim hasn't written all the code; some code was written by Joe, who has the same belief. In thread A, Jim passes a pointer to x to a function, which makes it visible to another thread B, but in a way that gets confirmation from thread B after it picks up its own ref, so there's no race that would stop both A and B from each safely holding respective refs in a handle. But Joe writes a piece of code in thread A that checks whether it is safe to modify x when the ref is only one.

There's a window in time around when B adds the ref, that if Joe's code runs at just the wrong moment in A, which isn't waiting on B in any way, it sees a one just before B adds the second ref. Both threads A and B have correctly arranged x cannot accidentally get freed due to a race, but they didn't arrange visibility of refcount one would not occur under race. Maybe it would have occurred to Jim if he knew Joe wrote code like that, but he doesn't ask the question because he doesn't know. There's a lot of scary code in the C world where the left and right hands don't know what each other is doing, and each part in isolation sounds harmless.

If refcounting is done with clever atomic instructions to ensure the count itself was always included for each ref added, you get the result that you can't lose a ref. But you don't get the result that you can tell what the current count is now, because each time you look it could have been under race. All you know is that if a ref was added that you know is held, the count cannot go to zero until you release yours. But all other bets are off. Basically not quite enough info is carried by the representation. Sorry this was long. Edit: if Jim merely added one more ref to represent "I'm about to show this to thread B" before making it visible, then it would be safe; but that scenario wasn't spelled out to Jim.

Got it

Thanks for the explanation. When relying on manual handling then I fully agree that there are problems.

I was thinking of automatic handling, where you cannot pass a ref-counted reference without actually bumping the ref count (as I assume Swift does). In such cases I believe it is possible to know that a CoW-optimized object is not aliased, even in a threaded context.

Go has a concurrency story,

Go has a concurrency story, but not one applicable to every situation. By keeping concurrency out of the language, they make it a library concern, which is how things are done now anyway in Objective-C.

checked arithmetic

I'm interested that Swift uses checked arithmetic by default. Overflow is a pervasive problem in real software, especially in a security context. It will be very interesting to see what performance impact there is from this; maybe it will turn out to be quite low.
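
The default-vs-wrapping distinction can be sketched for a 32-bit type (a model of the semantics, not Swift's generated code; Swift's checked + traps at runtime rather than raising a catchable error):

```python
INT32_MAX, INT32_MIN = 2**31 - 1, -2**31

def checked_add(a, b):
    """Swift-style default +: an out-of-range result is an error."""
    result = a + b
    if not (INT32_MIN <= result <= INT32_MAX):
        raise OverflowError(f"{a} + {b} overflows Int32")
    return result

def wrapping_add(a, b):
    """The &+ alternative: wrap around modulo 2**32."""
    return (a + b - INT32_MIN) % 2**32 + INT32_MIN
```

The runtime cost of the default is essentially the range test after each operation, which is what the performance question above is about.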

Low overhead

I seem to remember that for MLton, the overhead of SML's checked arithmetic was reported to be 1 to 2%, which would make it a no-brainer for most application domains. Certainly much cheaper than the overhead of RC. :)

Co- and Contravariance?

I expected a form of in/out annotations since the language has classes, therefore subtyping, and supports generics.

I didn't find any, and I don't yet get what they did. Uhm, are all generic instances invariant? Something like that?

Anyone here who gets it?

Benchmarks for Swift as of August 2014

Thought I'd post that new benchmarks are out.

It appears that Swift, with compiler flags set to remove all the safety features (array bounds checking, integer overflow checking, etc.), is actually faster than Objective-C! That is a big win for the LtU crowd, who have often argued that higher-level syntax doesn't imply slower code. With debug on and all safety features enabled, Swift is currently between 20 and 300 times slower than Objective-C. The recommended procedure at this point is to break out small chunks of code in Swift and compile them without safety features, while leaving safety features on for the rest. So right now Swift offers speed or safety, but not both.

This really can't last. For

This really can't last. For predictability reasons, you want your debug-enabled version of the code to run just a few times slower than your debug-free version. I'm betting that the integer overflow checks will go sometime in the future (unless a good optimization can reduce their cost).

You don't want to be debugging code that is substantially different from the code you release.

some day we'll have better tools

i mean, why doesn't somebody make a language system where you get free cloud-based abstract interpretation of your code so that you can have instant feedback from giant science fiction distributed AIs all dedicated to proving you do/not have integer overflow - statically, so that the expense goes away at runtime. sheesh.


It doesn't need AI, just an automated theorem prover; and it doesn't require a cloud of computers, the one with the compiler will do. Look, for example, at what F* can do:

Because useless

Because even if they could, that wouldn't help you to reason modularly, if the programming language doesn't allow you to accurately express the types and contracts at the module/component/library boundaries in the first place. And anything that's not applicable modularly, and only works for whole, closed programs, I ultimately consider useless.

Abstract interpretation can

Abstract interpretation can be modular. Higher-order Symbolic Execution via Contracts.


Sure, I didn't say that AI can't, but that nothing can be without somebody writing down the types and/or contracts.

proper integers

Or you could use proper integers by default like python, or at least have them available like Haskell.