Nullable type is needed to fix Tony Hoare's "billion dollar mistake".

The Nullable type is needed to fix Tony Hoare's "billion dollar mistake".
But implementing the idea is full of pitfalls, e.g., Null by itself should not be an expression.

▮ is the terminator to mark the end of a top-level construct.
An Expression◅aType▻ is an expression which when executed returns aType.

In an expression,
1. ⦾ followed by an expression for a nullable returns the Actor in the nullable, or
throws an exception iff it is null.
2. Nullable followed by an expression for an Actor returns a nullable with the Actor.
3. Null followed by aType returns a Nullable◅aType▻.
4. ⦾? followed by an expression for a nullable returns True iff it is not null.

* ⦾Nullable 3▮ is equivalent to 3▮
* ⦾Null Integer▮ throws an exception
* True is equivalent to ⦾?Nullable 3▮
* False is equivalent to ⦾?Null Integer▮

In a pattern:
1. ⦾ followed by a pattern matches a Nullable◅aType▻ iff
it is non-null and the Actor it contains matches the pattern.
2. Null matches a Nullable◅aType▻ iff it is null.

* The pattern ⦾x matches Nullable 3, binding x to 3
* The pattern Null matches Null Integer
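The expression forms above can be mimicked in C++ with std::optional, purely as an analogy; the function names deref, nullable, null_integer, and has_value are invented here and are not ActorScript:

```cpp
#include <optional>
#include <stdexcept>

// Analogy only: std::optional<int> stands in for Nullable◅Integer▻.

// ⦾e -- extract the Actor, throwing iff the nullable is null (rule 1).
int deref(const std::optional<int>& n) {
    if (!n) throw std::runtime_error("null dereference");
    return *n;
}

// Nullable e -- wrap an Actor in a nullable (rule 2).
std::optional<int> nullable(int x) { return std::optional<int>(x); }

// Null aType -- the null Nullable◅aType▻ (rule 3).
std::optional<int> null_integer() { return std::nullopt; }

// ⦾?e -- True iff the nullable is not null (rule 4).
bool has_value(const std::optional<int>& n) { return n.has_value(); }
```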

Edited for clarity


Fair enough

That is a very eloquent explanation of your reasoning.

If you could write some legal code, maybe we could understand

If you could write some legal code, maybe we could understand what you are trying to say.

Otherwise, it is just guesswork :-(

In any case, a message cannot be sent to an Actor unless its type is known.

If you could provide me with an implementation

If you can provide me with an implementation of ActorScript, I would be glad to write legal code. Otherwise, you will have to be my compiler and return the relevant error messages; it's impossible for me to guess what you might intend.

re Otherwise, you will have to be my compiler,

Otherwise, you will have to be my compiler,

"have to"?

You are getting lots of feedback that tells you with various degrees of frankness that your comments are confused and that casual attempts to help you are failing.

Have to

Have to, as in if you want me to bother writing code in ActorScript at your direction. I was asked above to provide code that demonstrated the problem. I did the best that I could without an environment to test it in. You don't have to provide feedback, but then I don't have to write any code for you.

freedom hall re have to

I am happy to provide you with feedback and help understanding DL and ActorScript, at least as far as I understand them. I think Hewitt is as well.

My suggestion remains to chill out for a while.

That's a good suggestion

That's actually a good suggestion.

No runtime polymorphism?

Does that mean you cannot send a message to an actor you know implements interface 'x' if you do not know the actual type of the actor? For example, I know I have an actor that implements "Show", but I don't know whether it's "ActorA" or "ActorB"; can I still send the unknown actor a message from the Show interface?

Interfaces are types

Interfaces are types. So if the type of x is an interface, then x can be sent a message even if the implementation of x is not known to the sender.
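A minimal sketch of this in C++, borrowing the Show/ActorA/ActorB names from the question (an abstract class plays the role of the interface; the sender needs only the interface type):

```cpp
#include <string>

// The interface is itself a type; senders know only Show.
struct Show {
    virtual std::string show() const = 0;
    virtual ~Show() = default;
};

struct ActorA : Show {
    std::string show() const override { return "ActorA"; }
};

struct ActorB : Show {
    std::string show() const override { return "ActorB"; }
};

// Sends the message through the interface type alone; the concrete
// implementation is not known to this caller.
std::string describe(const Show& x) { return x.show(); }
```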

What requires runtime compilation?

Answering this question here: Above you say "compilation can happen at any time"... this requires runtime compilation and the inclusion of the compiler in the application. You need this if you are going to in-line code where the implementation used depends on IO (reading a file or user input).

Compilation can happen at any time there is something to compile

Compilation can happen at any time there is something to compile.


This is imprecise and meaningless. The sun shines whenever it is sunny. Really?

Does actorscript compile code at runtime? A simple question that only needs a simple yes/no/I don't know answer.

re imprecise

This is imprecise and meaningless.



"Compilation can happen at any time there is something to compile." "Exhalation can happen at any time there is something to exhale." It's a tautology.

re citation

"Compilation can happen at any time there is something to compile." "Exhalation can happen at any time there is something to exhale." It's a tautology.

Neither of those are tautologies.

Does actorscript compile code at runtime? A simple question that only needs a simple yes/no/I don't know answer.

Not a useful answer.

"Compilation cannot happen when there is nothing to compile"... "Compilation can only happen when there is something to compile"... Well, it's a truism if not a tautology.

Sorry, as much as I try to be the un-carved block, your response is a poor one. Try answering the following questions:

- What do you understand by "compile"?
- What do you understand by "runtime"?

re uncarved block

Try answering the following questions:

I don't think the effort would help at this time.

It is impossible to compile code without runtime :-)

It is impossible to compile code without runtime :-)

At runtime not with runtime.

Compile at runtime, not with runtime. As above you seem to be avoiding the point.

All computation happens at runtime :-)


All computation happens at runtime :-)

I can't figure out what you are talking about :-(

it was a pretty conventional sort of question low on surprise

(Before compile/runtime comments, please note you, Thomas, and Keean could aim for a better class of exchanges, perhaps by act of will, where you decide not to respond to every perceived slight, even if taunts are intended. While it's none of my business, I don't see what reward comes of bickering. Just skip to the part you want to say and pretend some things didn't happen. Meta points about what someone said have meager value -- including this one, so I'm done.)

Keean is drawing a distinction I assume you grasp, so pretending not to twig sounds passive aggressive. Suppose a runtime that can compile is denoted K (a process with code for a compiler), and a runtime that can only execute is denoted X (a process where compiled code runs). They can be the same process. Nothing stops you from having a process that runs both the K compilation stuff as well as the compiled output X executable (whether native code, byte code, or any other form of interpretible instructions).

There's a long tradition of separating them. Lots of tools have a compile phase that is completely different than the execute phase, belonging to different processes. (And 25 years ago, it would be a high end machine that could do both K and X in the same process, so it wasn't practical.)

Keean asked whether you require K to be present in the runtime executing X, or whether it's permissible to have a process that runs X and not be able to compile more using K code. It might bear upon security analysis (what can happen during end-user execution), and upon how many resources are required to run actors (whether small platforms are targets).

re it was a pretty conventional....

ActorScript is a very general way to describe computations. By the time you are talking about processes and phases and runtimes in those particular ways that you are, you have ladled on lots of vague and ambiguous contingencies and additional concepts that aren't essential.

I am guessing it results from an impulse to force ActorScript to more resemble familiar things than it does, but that's just a guess.

pretending not to twig sounds passive aggressive.

I'll keep that in mind but I don't think it applies here.

Practical concerns

I am concerned with how I might implement a language like ActorScript. I am interested in the practical concerns of ActorScript, not describing general computations (well, only insofar as describing computations to the computer).

As for making ActorScript resemble familiar things, those things have a history of performance, anything new has to prove itself better in order to be worth adopting. At the moment I see no advantage to getting rid of the compiler, only disadvantages. These are the same disadvantages interpreted languages have, even JIT compiled ones.

JavaScript is not a replacement for C, C++ or Ada. If the target for ActorScript is to replace JavaScript then fair enough. In the end, ignoring the problems by claiming ActorScript is new and different does not make the problems go away. Language design is all about compromises, and one set of compromises is never going to be suitable for all applications.

ActorScript must have at least the performance of C++, Java, etc

ActorScript applications must have at least the performance of those written in C, C++, Java, etc.

Requires a separate compile phase

All the languages with that level of performance have a separate compile phase. Also grouping Java with C/C++ does not make sense as Java falls short of C/C++ performance. Ada is another language with C/C++ like performance (if you turn off range checks and other runtime features).

Java applications can sometimes be more performant than C

Java applications can sometimes be more performant than C applications because of additional capabilities of Java that facilitate managing more complex applications.

Unlike ActorScript, Java comes with some inherent overhead. There is nothing in ActorScript that forces applications to be less performant than C.

Ada is a better example

Anecdotal, I know, but Java applications are slow resource hogs in my experience. Every time I have deployed server systems using Java they have required far more resources than other languages. Of all the application platforms, Java has caused us the most problems with slow applications and excessive memory use. Node.js in JavaScript is better than Java for services (mainly due to the huge amount spent on optimising JavaScript, and the event-based IO model preventing threads waiting on IO from using memory).

Another clear example is to compare Android performance (Java) with iOS performance on similar hardware (Objective-C), or using a native C library on Android. In fact we can always rewrite the 'C' program to have better performance than the Java, if we have enough time or experience. Generally I get frustrated with 'C', having to write yet another list of type X, but then the C++ Standard Template Library is the solution. I find the C++ STL to be the best collection library of any language I have used, but then it was written by Stepanov, so that's expected.

As for ActorScript, if it has a garbage collector then it won't be as performant as 'C' (not due to the collection, but the extra indirection to allow relocation). If it doesn't have a mechanism for explicit static polymorphism like templates or type-classes, then it cannot have zero-overhead abstractions, unless it has really good partial evaluation. However, the compiler is never as good as the programmer at knowing which things to partially evaluate, so although performance will approach that of templates/type-classes, perfection is unobtainable, and it will always lag behind, sometimes significantly so.
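For contrast, here is a tiny example of the kind of explicit static polymorphism meant above, sketched with a C++ template (the name sum is invented for illustration): the abstraction is specialized per element type at compile time, so nothing prevents the compiler inlining it down to the same code as a hand-written loop.

```cpp
#include <numeric>
#include <vector>

// One generic definition; a separate, fully concrete instantiation is
// generated for each T at compile time -- no virtual dispatch, no boxing.
template <typename T>
T sum(const std::vector<T>& xs) {
    return std::accumulate(xs.begin(), xs.end(), T{});
}
```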

For me C++ and Ada are the best compromises as they have the performance of C, but facilitate managing larger applications. Ada wins on overall management, readability and structure, but C++ is a little more flexible for meta-programming, and zero overhead abstractions, with its template system.

If you have not looked at Ada, I highly recommend learning it; it's from the Pascal family, so everything is much more controlled and safer by default than 'C'. Leaking memory is much harder, and access restrictions make dangling pointers impossible without explicit coercion. So really it has all the apparent advantages of Java, yet if you turn off runtime checks (which C/C++ don't have) it can be as fast.

Ada didn't make the grade :-(

Both Hoare and I consulted on the development of Ada. Unfortunately, it didn't make the grade for reasons that have been discussed in the literature.

Ada as a user

As someone who has programmed real-world projects in C/C++, Java, and Ada, I can tell you that Ada is not that bad. In some ways it's the best of the three. C++ wins on popularity, but Ada Generics are a far more controlled tool compared to C++ templates (although neither matches Haskell's type-classes). Ada 2012 has significant improvements over earlier versions; for example, dependency injection with generic signature packages actually works now thanks to partial instantiation. Personally I think adding the object-oriented features to Ada was a mistake (to try and compete with C++/Java) and it should have focussed on getting Generics and ADTs using modules right. It still has some annoying limitations, like no return by reference, which makes implementing efficient containers hard.

What do you like about Java, or what do you think it does better than Ada?

Java has traction, which Ada totally lacks

Java has traction, which Ada totally lacks.

New VM frameworks that generalize the JVM (e.g. from IBM) can be much more competitive.

Popularity and Performance.

So you are judging the language by popularity, not by any measure of its features, capabilities or performance. That's okay, I can understand that, but popularity (beyond the fact that open-source compilers are available and there is a reasonably sized community) is not something I am concerned with.

The JVM can be more competitive, but it's never going to compete with C/C++ (even though IBM provides an AOT compiler for Java). I thought the answers here did quite a good job of mentioning all the standard well-known issues, saving me re-typing what is already well known.

Traction causes investment which is only way to compete on perf

Traction causes investment which is only way in the long term to compete on performance.

The arguments on Stack Exchange about the superiority of C++ are fallacious.

Benchmarks and real world performance

So, the fact that Ada outperforms Java kind of makes the traction argument wrong, and C++ clearly outperforms Java.

The measurements are there for all to see. You saying Java is faster does not make it suddenly speed up on my computer. I prefer empirical testing to baseless speculation any day.

Currently no good benchmarks for complex many-core applications

Currently, there are no good benchmarks for complex many-core applications :-(

Complex Many Cored Applications

Are there any good complex many-core applications yet? Most seem to top out performance at 8 to 16 cores due to shared memory issues. All I can say is that I write benchmarks for the specific use-cases I need, and evaluate possible languages.

For my Monte Carlo simulation engine, production of random numbers is important, so I benchmark the exact PRNG algorithm; I benchmark the simulation itself, etc. Unfortunately this stuff just does not parallelize (because the information gained from one simulation about the best input parameters for the next simulation means that you would just throw away any other parallel simulations).

Sophisticated many-core applications

There are commercially important sophisticated many-core applications.

Unfortunately, the ones that I know of are proprietary :-(

Complex many-core applications require garbage collection

Complex many-core applications require garbage collection.

Continuously adjusting reference counts is far too inefficient and error prone :-(

Of course, newly invented garbage collection technology will have to be used in order not to be slowed down by indirection and pauses.

Garbage Collection

Any GC that avoids indirection and pauses will require modification of the CPU architecture. Whilst this is an interesting problem, I am more concerned with implementing things efficiently on the current CPU architectures, like the Intel Core2.

I think using the type system (linear alias types) to track memory usage and ensure it is freed is the best option for safe memory management without runtime performance penalties.
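As a rough approximation of this idea in existing practice, C++'s std::unique_ptr gives affine (use-at-most-once) ownership: each allocation has a single owner, transfer is by move, and freeing is deterministic with no collector. This is only an analogy to linear alias types, and the names below are invented for illustration.

```cpp
#include <memory>

// Allocation returns the sole owner of the cell.
std::unique_ptr<int> make_cell(int v) { return std::make_unique<int>(v); }

// Takes ownership by value; the cell is freed deterministically when
// p goes out of scope at the end of this function -- no GC involved.
int consume(std::unique_ptr<int> p) {
    return *p;
}
```

After `auto c = make_cell(7); consume(std::move(c));` the moved-from `c` can no longer be used to reach the storage, which is the type-system guarantee the comment is after.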

Linear alias types are inadequate for many-core applications

Linear alias types are inadequate for many-core applications.

Compaction provided by garbage collection more than makes up for indirection cost. Pauses can be eliminated without modification of the x86 architecture.

Linear Types and GC.

Do you have a reference about linear alias types? That's something I would be interested in reading, to see what the problems are with multi-core applications.

As for the GC arguments, compaction does not make up for indirection on modern CPUs, due to the cost of memory access and the ability of high-performance caches to hold data from different parts of memory. Effectively the cache hides memory fragmentation, but the extra size of pointers due to indirection is a problem. Besides, real-life experiments do not support your claim.

Pauseless GC is certainly interesting for real-time use, but I don't think it's the cause of the significant performance difference, which I think is due to the extra indirection. Still, it seems like the kind of GC algorithm to go for if developing a language with GC.

Compaction significant for many-core perf: caching and paging

Compaction significantly improves many-core performance because of caching and paging fragmentation.

Linear alias types

What is wrong with linear alias types?

Modern GC

Typically modern GCs don't have any indirection cost when accessing an object, as they re-write references during compaction rather than going through an indirection table.

Rewriting many references must be slow

If there are a lot of references, rewriting must be slow? How do you find all the references in the code?


With generational GCs, generally it only needs to look at a subset of the memory space. They're generally pretty smart nowadays.

Reverse References

Generational GC is based on the lifetime of the objects, so the pool for the short-lived objects only has to be checked locally, but sweeps of the older objects could still turn up references from anywhere. If you don't want to scan and abstractly interpret the code, you are going to have to store reverse references in the objects, listing the addresses from which they are referenced. All this is extra memory (and main-memory access is slow), so where you have large arrays of objects, the additional cost of maintaining the back-references (which need to be in variable-sized storage) is going to be large.

Reference references

They don't use reverse references; they use write barriers with fast paths to detect changing references. For roots they use stack maps, or in some cases, like MPS, are conservative on the stack. Old-object sweeps are generally rare (if we trigger a gen-2 sweep more than a couple of times in a few minutes, we investigate).

I'm not saying GCs are perfect, and for high-performance computing you need to know what they are doing, but in general I've actually seen more issues in production from C++ heap fragmentation, using basic malloc/free in inefficient ways, than I have from GC issues. Recently, high-performance allocator replacements like Google's effectively run GCs, but at the page level, to avoid fragmentation, so you can still get collector pauses etc. The lesson is: to be fast, you have to understand and control allocation, whatever method you use.

Understand and control allocation

Well, I agree with understanding and controlling allocation. A common pattern I come across in my work is a collection type that contains a lot of objects, where we have functions that have a lifetime less than the collection (this is really common). In this case C++ style RAII is optimal. The collection allocation is controlled by a stack based handle, and all operations on it can use plain pointers (no ref counting).
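A minimal sketch of that pattern in C++ (names invented for illustration): the collection is owned by a stack-based RAII handle, and shorter-lived functions borrow it by plain reference with no reference counting.

```cpp
#include <cstddef>
#include <vector>

// The handle owns the storage; its destructor frees everything when
// the handle leaves scope (RAII).
struct Samples {
    std::vector<double> data;
    explicit Samples(std::size_t n) : data(n) {}
};

// Borrows the collection by plain reference: no ownership transfer and
// no ref counting, safe because its lifetime is shorter than the handle's.
double first(const Samples& s) {
    return s.data.empty() ? 0.0 : s.data.front();
}

// Typical use: handle on the stack, borrowed only within its scope.
double demo() {
    Samples s(3);
    s.data[0] = 1.5;
    return first(s);   // storage is freed automatically after return
}
```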

ActorScript does not impose any inefficiencies

ActorScript does not impose any inefficiencies:
* ActorScript does not impose the use of garbage collection, although it allows its use.
* ActorScript does not impose any inefficiencies on implementation types, which can always be in-lined.
* JavaScript suffers greatly because it has been decreed to forever be sequential (although workers are allowed using an inefficient interface).
* ActorScript does not have insecure casting as in C, C++, Objective-C, Ada, etc.
* Both C++ templates and Haskell type-classes are difficult to use and lack generality.

Lack of Generality

Glad to hear that GC is not mandatory. What would explicit memory allocation and freeing look like in ActorScript?

I still think saying things can always be in-lined misses the problem with separate compilation, but maybe you are happy requiring whole-program compilation only?

JavaScript does indeed have the problem of being sequential, but it can be surprisingly efficient as a web service, processing more transactions with fewer resources than Java using multiple threads. One of the keys is that because it is sequential, all IO has to be done using events, so there are no blocking operations. Other languages supply threads, then rely on threads for all parallelism and have blocking IO, which causes servers to run thousands of threads to keep up throughput, using huge amounts of memory for all the thread stacks.

- You don't have to use insecure casting, but as I don't use it in C++ or Ada, I won't miss it in ActorScript.

- I find lack of generality to be a poor criticism of C++ templates, which are so general you can meta-program using them. If anything they are too general and uncontrolled. C++ templates are actually a Turing-complete functional language. Haskell type-classes don't have the "you can meta-program anything" problem, as they are type-safe, but they do in fact form a committed-choice logic programming language in which you can write logic meta-programs. So I don't see where the claim of lack of generality comes from.
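The Turing-completeness point is easy to demonstrate: templates can recurse on values at compile time. A classic sketch, a factorial evaluated entirely by the compiler:

```cpp
// Recursive case: instantiating Factorial<N> forces instantiation of
// Factorial<N-1>, all resolved during compilation.
template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};

// Base case, expressed as a template specialization.
template <>
struct Factorial<0> {
    static const unsigned value = 1;
};
```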

Generality of ActorScript

In ActorScript, everyone can compile their own programs :-)

If x is of a type that supports freeing, then Free x will free x.

Haskell type classes are much less readable than parameterized types in ActorScript :-(

Edit: clarified freeing.

What you are used to?

I think you are missing the point of whole-program compilation. In languages like C++/Ada/Haskell, you can compile only part of the program to an object file. There is a separate linking phase for joining object files together. This is done because it takes a long time to compile a large program, and you don't want to have to re-compile the whole thing just because of one small change. Because of the requirement to recompile just one file, you don't necessarily know all the implementations of an interface when the user of that interface is recompiled. If ActorScript supports separate compilation, then not everything can be in-lined.

How do you allocate memory? I presume that's just actor instantiation, but then are arrays (because of the variable storage size) a special case? How do I implement the 'free' message on an actor I am writing? For example, how do I free a Tree made of Forks and Leaves (definitions as used in earlier examples)?

I am more used to Haskell type-classes, but I quite like parameterized types. I think it's too early to tell which I prefer. I need to write more ActorScript first.

Compiling top-level definitions separately is fine

Compiling top-level definitions separately is fine.

But, it is only possible to inline an implementation type if the implementation is known. If an implementation is changed for an inlined message send, then the message send may have to be recompiled.

Interfaces and implementations

How do you think ActorScript would break into separate files? Do you think something like a module system is needed where you specify which definitions are exported or imported into each file? Should all definitions be public, or one public definition per file like Java?


Most C++ linkers nowadays can do link time inlining. That doesn't remove the point about dynamic plugin loading though


How do you deal with register substitution? When inlining you don't want to create a new stack frame, so you will need to adjust the stack frame of the calling procedure, and change the register/stack addresses in the called procedure. Is it able to do as much as inlining in the source code?


I'm not sure of the exact mechanisms, but I believe the effect is as good as if you have the source code. Both GCC and MSVC have been doing this for a few years now.


I have been using GCC "lto" for a while now, so I should have put 2 and 2 together. Looks like it is dumping the internal representation to disk, so .o files are not really object files (in the old machine-code sense). In which case it makes sense that it can in-line in these circumstances, but obviously not with shared-objects (DLL, .so) or plugins.

Ada has done this for much longer with its 'Elaboration' of generics, which can happen at runtime in some systems. I wonder if GCC takes advantage of lto when elaborating Ada generics (as GCC does all elaboration at compile time not runtime)?



more negotiation that force-fitting resemblance

My siblings gave me a useful early education, which I didn't appreciate at the time. While my brain was still malleable, I went through hundreds of iterations in the "Are you trying to make me angry?" game, with three brothers and one sister all very close to my own age. (When I hit kindergarten, I had three younger siblings, because Irish Catholic parents didn't practice birth control.) Getting angry is a complete waste of time unless energy can be channeled. Competing too is something of a waste of time. Sibling rivalry in a big family is a troll fest. Kids in small families these days lack proper development of immunity to childish behavior.

I'll keep that in mind but I don't think it applies here.

It's always an interesting question whether acting dumb is done on purpose. Smart folk almost always twig after repetition, because they re-examine things missed at first. So in very bright people, I don't accept a position of: "I haven't the vaguest idea what you are suggesting." It's just petulance unbecoming in scholars when gravitas would be more effective.

By the way, thanks for your clear response, which lets me know where you and Hewitt are coming from. I inject vague contingencies and concepts as part of negotiation to explore what folks mean. I simply do it in good faith, sticking with ideas I expect get less vague when pursued and refined.

re It's always an interesting question whether acting dumb is do

It's always an interesting question whether acting dumb is done on purpose.

It is easy for a serious person to take pains to avoid raising or resolve the question about himself.

cognitive strategies

We've got two different cognitive types mismatching and thus talking past each other. One type (which Hewitt apparently has) uses, as a primary strategy, massive data collection (such as historical); tending to view problems as specific complex puzzles to work through by means of their details. Another makes heavy use of intuitive grasp of underlying principles; for which, massive data collection is neither sufficient nor necessary to grok the problem. In some areas of research these two types can be complementary (though alas foundations of mathematics is not so much an area of that sort). Even in areas where they can be complementary, though, people of those types are apt to get on each other's nerves, and perceive each other as missing the point. I'm obviously not in a position to give advice, having myself simply given up on bridging the gap; but perhaps you may find it useful to keep in mind that the cognitive mismatch is deep and works both ways, without requiring ill will on either side.

Please add to site operating discussions

What you have just described is a beautiful observation of a very general situation. For example an argument that I had with a colleague a few days ago could easily be described as an instance of these two cognitive strategies talking past one another. This pattern is general enough that it describes a large proportion of the times that LtU has hit the right margin in ever decreasing font sizes.

Perhaps a reminder of this point would make a good permanent addition to the site docs?

Compiler is required in order to do compilation :-)

Of course, a compiler is required in order to do compilation.

The question is whether ActorScript imposes any limitations on inlining implementations.

Both extension and discrimination are needed in a modern PL

Claim: Both extension and discrimination are needed in a modern programming language:
1) Discrimination is needed to abstract arbitrary types.
2) Extension is needed to extend a type with additional capabilities.

Of course, an implementation also needs to be able to use other implementations.

Discrimination vs extension

It seems to me the functionality of a discrimination is a subset of the functionality of type extension. I don't see the need to have both. Extension seems the better mechanism as it has extra type-safety.

Edit: Maybe my terminology is a bit wrong. What I mean is that there is an interface for, say, a Tree, and you extend that with a Leaf and a Fork. Leaf and Fork both have to implement the interface messages, but on a different type. This works effectively like a virtual function. So you have a 'Tree' but you don't know whether it is a Fork or a Leaf, yet you can call 'getHash' on it in any case. Dispatch of getHash works like a type-case, calling the correct underlying type.

A discrimination is also a type-case where you want to branch based on the type. The easy way to see this is to consider the interface:

Interface Tree with
    isLeaf[] |-> Bool
    isFork[] |-> Bool

Actor Leaf[terminal:t] extends Tree using
    isLeaf[]:Bool -> True
    isFork[]:Bool -> False

Actor Fork[left:Tree, right:Tree] extends Tree using
    isLeaf[]:Bool -> False
    isFork[]:Bool -> True

Now you can use an 'if' statement to match over the type of tree node, like you would with a discrimination.

You wouldn't actually do it like this though; you could move the branches of the discrimination's case statement into the actors.

This duality between type-case statements and virtual functions, is reflected in the duality between discriminations and interfaces. Seeing as discriminations can be implemented in terms of interfaces, then you either don't need them, or they can be simple syntactic sugar for the relevant interface.

I think ideally 'case' pattern matching should conform to an interface allowing user defined matching. So that:

case t of
    Leaf -> ...
    Fork -> ...

is effectively syntactic sugar for:

if t.isLeaf[] then ...
else if t.isFork[] then ...
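This desugaring can be sketched in C++, following the thread's Tree/Leaf/Fork example (countLeaves is an invented stand-in for a case expression, and the static_cast plays the role of the conditional downcast):

```cpp
#include <memory>

// The interface carries the "tag query" messages from the example above.
struct Tree {
    virtual bool isLeaf() const = 0;
    virtual ~Tree() = default;
};

struct Leaf : Tree {
    int terminal;
    explicit Leaf(int t) : terminal(t) {}
    bool isLeaf() const override { return true; }
};

struct Fork : Tree {
    std::unique_ptr<Tree> left, right;
    Fork(std::unique_ptr<Tree> l, std::unique_ptr<Tree> r)
        : left(std::move(l)), right(std::move(r)) {}
    bool isLeaf() const override { return false; }
};

// "case t of Leaf -> ... Fork -> ..." becomes an if on isLeaf();
// the cast is the downcast a discrimination would perform.
int countLeaves(const Tree& t) {
    if (t.isLeaf()) return 1;
    const Fork& f = static_cast<const Fork&>(t);
    return countLeaves(*f.left) + countLeaves(*f.right);
}
```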

Discrimination between Integer & Account: no common messages

A discrimination between Integer and Account requires no messages in common between Integer and Account. Also, the discrimination only supports conditional downcasting; it doesn't have any of the messages of Integer or Account.

The tag is an actor.

In a discrimination you have to test the type tag to tell if it is an integer or an account. You can replace the tag with a type, and a simple interface to query the type.

isInteger / isAccount messages represent querying the tag in the discrimination. Let's say the tag itself is an actor that accepts these messages.

What is the tag, if it is not an actor?

As for down casting, I think having an interface with virtual functions is better than casting. You don't need any casting at all (casting is bad remember).

If x is IntegerOrAccount, then x⧁Integer is conditional

If x is of type IntegerOrAccount, then x⧁Integer is conditional and throws an exception if x is not IntegerOrAccount[n] for some integer n.

Throwing an exception

Throwing exceptions is not hard:

if ¬x.isInteger[] then throw TypeError

So the discrimination is not adding functionality you cannot otherwise achieve.

Alternatively, the Integer actor implements "isInteger" returning void, and all other actors do not implement it. Then simply calling "x.isInteger[]" will throw an exception if it's not an integer.

ActorScript does not impose any performance hit

The point is that ActorScript does not impose any performance hit.

What performance hit?

What performance hit are you talking about?

What code do you think requires distributing a compiler

What ActorScript code do you think requires distributing a compiler?

Wrong Thread

This question is in the wrong thread. We were discussing the necessity of supporting both interfaces (and type extension) and discriminations here.

Type extension is fundamental; discrimination is for cobbling

Type extension (both for an implementation and to extend an interface) is fundamental. Discrimination is for cobbling together two otherwise unrelated types.

Related by discrimination

The fact that two types are in a discrimination relates them. Discriminations can be implemented using an interface type and type extension, therefore the core language only needs to support interfaces and type extension. Discriminations can be added as syntactic sugar by the parser, or be converted into interfaces and extensions by an early pass of the compiler.

A discrimination signature: no signatures of its discriminants

A discrimination signature: no signatures of its discriminants.


Are you saying that a discrimination does not allow its discriminants to have a signature? Surely that's just a special case of a type extension where there are no messages accepted? Doesn't this also forget about the implicitly defined isInteger and isAccount messages? Don't the discriminants have an implicit signature extending the discrimination type with "isWhatever"?

re Discrimination

What is a signature?

A discrimination of stuff can be used only for discrimination

A discrimination of assorted stuff can be used only for discrimination.

There are no implicitly defined messages (upcasting and downcasting is done by type Actors).

Caching (a la Haskell) is most useful in context of concurrency

Claim: Caching (a la Haskell) lacks utility just by itself.

It is most useful in the context of concurrency.

Memoization is important in many non-parallel algorithms.

In functional languages "caching" is called 'memoization', and it is important for efficiency in many non-parallel algorithms.
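As a concrete sketch, the standard Haskell memoization idiom for Fibonacci, where lazy sharing caches each value after its first computation:

```haskell
-- The list 'fibs' is built lazily and shared, so each element is
-- evaluated at most once: repeated calls to 'fib' hit the cache.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```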

Caching is important for performance in many concurrent systems

Caching is important for performance in many concurrent systems.

Caching is important.

Having established that caching is important for functional and concurrent languages, the conclusion is probably that "caching is important".

Concurrency is crucial. Haskell left out concurrency :-(

Concurrency is crucial in practice.

Haskell left out concurrency :-(

Haskell has concurrency

Haskell has channels for concurrency (the communicating sequential processes model). What it left out was parallelism, in the sense of automatically parallelizing code; you can change the evaluation semantics without changing the result of Haskell programs. For example, look at "purescript" as an eager-evaluating version of Haskell. Pure functional programs can be automatically parallelized.
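For concreteness, a minimal example of CSP-style channel concurrency using GHC's Control.Concurrent (a sketch, not from the original post):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

-- One thread sends a message over a channel; the other blocks until
-- it arrives: communicating sequential processes in miniature.
roundTrip :: IO String
roundTrip = do
  ch <- newChan
  _  <- forkIO (writeChan ch "hello from another thread")
  readChan ch
```

Running `roundTrip >>= putStrLn` prints the message once the forked thread has written it.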

Concurrency includes parallelism as a special case

Claim: Of course, concurrency includes parallelism as a special case.

Other way around

It's the other way around. Concurrency is explicit: it is where you have an abstraction and explicitly write concurrent code.

Parallelism is implicit, so it's about taking an expression like

2 * 5 + 6 * 7

and doing data-flow analysis so you can do the two multiplications in parallel.

So concurrency (which is explicit) cannot contain parallelism (because parallelism has no explicit annotations). Parallelism can contain concurrency, as that's just providing hints to the compiler about where things can run in parallel, or telling the compiler explicitly that this can be run in parallel.

Haskell lacks most of the constructs needed for concurrency

Haskell lacks most of the constructs needed for concurrency in a modern programming language, e.g., the ones here.

CSP works fine

Using channels for concurrency in the communicating-sequential-processes model works fine. So to say Haskell lacks the constructs necessary for concurrency is not true. It works, you can write concurrent programs, and it's a nice model to program with.

Haskell: not suitable for programming general concurrent systems

Claim: Haskell is not suitable for programming general concurrent systems.

Easily False

This claim is too strong. Haskell can easily be used to program concurrent systems, and it does so well, as mechanisms like channels and software transactional memory work well (a much cleaner abstraction than threading in C/C++).

You might claim Haskell is not the best language for programming concurrent systems, but in order to make this claim there must be a better alternative. Maybe that's Erlang?

Transactional memory: not suitable foundation for concurrency

Transactional memory is not a suitable foundation for concurrency

Transactional Memory is Practical

Transactional memory works well as a solution for efficient concurrency by being optimistic (as opposed to locking, which is pessimistic). So it is suitable for programming general concurrent systems. Introducing 'foundational' is shifting the goalposts from the point above. You might be right that STM does not seem foundational to concurrency, but shared memory is an important part of concurrency.

For example, how do actors model a shared database? This is the use case for STM, and it's more efficient than locking. How does ActorScript ensure the database remains consistent with parallel access? Is this as efficient as STM?
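The account-transfer use case can be sketched with GHC's stm library (an illustrative sketch, not the poster's code):

```haskell
import Control.Concurrent.STM

-- Move money between two shared balances in one atomic transaction;
-- concurrent transfers retry optimistically instead of taking locks.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

demo :: IO (Int, Int)
demo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  (,) <$> readTVarIO a <*> readTVarIO b
```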

Under high contention, Transactional Memory can thrash

Under high contention, Transactional Memory can thrash for larger memory regions.


STM can thrash, but like other abstractions, it is a tool to be used when appropriate. Haskell has other abstractions that can be used for concurrency, like channels. You may as well say database transactions can thrash, so databases are useless for concurrent applications. This is obviously false; just look at all the airline seat booking applications out there.

I'm not convinced that Hoare is really to blame

Having a zero not be a valid pointer value, and using it to mean something other than a value, are pretty obvious ideas. I didn't know Hoare put null into ALGOL W; is there any reason to believe that K&R C wouldn't have "#define NULL 0L" without ALGOL W?

I've often implemented a typed null (a singleton object representing end-of-list, for instance) - not for type safety, but in order to speed up code and make it simpler.

I think the question "should you use null" probably depends on the program. Null is the popular alternative to a dependent type or algebraic type, but much more popular than it should be.

The challenge is to do better than Null

The challenge is to do better than Null as a distinguished out-of-band value for a type.

In some cases

a singleton is definitely better.

For instance, a doubly-linked ring list with a no-value link at the beginning/end is generally better than a null-ended doubly-linked list.

nil/null has other problems in other languages.

In some languages nil AND false have to stand in for not-true. That's a pointless disaster for speed and optimization.

To mention something obscure, nil in Lua is worse than in other languages, because it puns to mean "delete this element", and putting it into an array can convert the array into a sparse array at run time. I think that was a bad choice.

Nullable◅String▻ in double-linked string list

In a doubly-linked list of strings, Nullable◅String▻ can be used to accommodate the absence of a string.

Not-nullable Challenge

Isn't the challenge to have not-nullable types by default, so a pointer to an int always has to point to an int to be valid? Then you can selectively introduce nulls where necessary, but it is explicit in the type when you have to, and you also have to explicitly construct/deconstruct the nullable value. This last part is important: it means you cannot just treat the nullable value like the not-nullable value. So, for example, to add two not-nullable ints:

add :: Int -> Int -> Int
add x y = x + y

Now for nullable,

add :: Maybe Int -> Maybe Int -> Maybe Int
add (Just x) (Just y) = Just (x + y)
add _ _ = Nothing

Note you cannot write:

add :: Maybe Int -> Maybe Int -> Maybe Int
add x y = x + y

unless '+' is defined on "Maybe Int".
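For completeness, the idiomatic way to lift '+' over Maybe uses the Applicative instance, equivalent to the explicit pattern match above:

```haskell
-- Both arguments must be Just for a result; any Nothing propagates.
addM :: Maybe Int -> Maybe Int -> Maybe Int
addM x y = (+) <$> x <*> y
```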

A Nullable◅Integer▻ is *not* an Integer

A Nullable◅Integer▻ is *not* an Integer.


"Nullable<T>" is not "T", in Haskell "Maybe T" is not "T". At least in Haskell I don't have to work out how to type "small circle inside big circle" to use the values :-)

In IDE, (()) is keyboard macro for ⦾

In IDE, (()) is keyboard macro for ⦾


Carl, this has the potential to be an interesting topic. However, a few things might make your post more useful or more likely to lead to an informed discussion:

  • Syntax: An explanation of the syntax. Not only does ActorScript in general seem to use a lot of syntax that is unfamiliar to most people (a link to the ActorScript paper would be helpful here--an explanation of the syntax even better), but the syntax you're using in your post appears to differ from that in the latest ActorScript paper on HAL.
  • Semantics: Some kind of explanation of the intended semantics of your nullable types. The examples you give provide some kind of flavor, but they are somewhat terse and cryptic (additional exposition might help), and don't really explain the general rules under which Nullable types operate.
  • Context: An explanation--presumably in terms of semantics--of how your Nullable types differ from the usual Nullable or Option types found in many other languages. And what the perceived advantages of your proposed semantics are.
  • Actor Model Integration: An explanation of how Nullable types fit into the Actor Model. I guess this is also semantics, but specifically in terms of the Actor Model. In the past you've made it clear that "everything is an Actor". So if Null isn't an Actor, then what is it? And how does it interact with Actors?

You may also find it helpful to look over Rule 4 of the LtU policies.

Thanks for the helpful suggestions

I learned something important from this LtU discussion and as a result ActorScript has changed by making nullable part of the language and not just a kind of discrimination :-)

The next update on HAL will have the new syntax.

I could post the meta implementation of Nullable, Null, and ⦾ here, but that might not help much.

PS. I added some explanation to the posting at the beginning of this LtU topic.

Formatting messed up

The formatting of the topic is all messed up now, and I can't really read it anymore.

I find it very hard to type all that Unicode on a normal keyboard; I would want a programming language to stick to only common keyboard symbols. Fancy Unicode is fine for maths papers, but it gets in the way of writing longer programs.

I am interested in actors to experiment with them, but I don't think I will be using this notation.

Further I don't see any difference between nullable and a normal discriminated union, I don't see any reason for making it special syntax, in fact I prefer Haskell's way of declaring datatypes and just allowing the language user or libraries to declare a 'Maybe' type if they need one.

Making 'null' return an exception seems semantically odd. Either something is an exception or it is not. The Nothing in the Maybe type is just a method for implementing exceptions (with no message).

I think Codd has more to say about Nulls than Hoare. A null represents when no value is known. For example if you have date of birth, but you might not know the date of birth of a particular person, you can use a nullable type and set it to "Nothing".
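The date-of-birth example can be sketched in Haskell (the Person record and its fields are hypothetical names for illustration):

```haskell
-- 'Nothing' here means Codd's "value not known", not an error.
data Person = Person { name :: String, birthDate :: Maybe String }

describe :: Person -> String
describe p = name p ++ ": " ++ maybe "date of birth unknown" id (birthDate p)
```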


this topic is the one left that I CAN read the comments on.

Others are way too wide for the screen; I have to scroll back and forth to read a single sentence.

You'd think that a web site for smart programmers could change a program to display properly: set a maximum width for the page and a minimum width for posts (some are also too narrow to read), and display some sort of set of lines at the left to represent the indentation that can't be shown as more space.

About null being no value, I'd also suggest other kinds of "no values": for instance, a value to signal a sullen refusal to answer the question, a my_opponent_is_a_well_known_wife_beater value for a tu-quoque refusal to answer, etc...


I was referring to the Unicode symbols in the OP being messed up. I think it happens sometimes when you edit posts with Unicode in.

Missing explanations

I can only read the post if I mentally throw out uses of ▮ and Expression (there's still too much guesswork left, but at least I have some consistent model). But what's left is extremely boring, compared to all alternatives described. And the post does suggest that Expression is interesting — "Null by itself should not be an expression".

I still wonder what ▮ means. The post doesn't explain Nullable◅aType▻ either, but I suppose that syntax is the equivalent of Java's Nullable — that is, application of type constructor Nullable to type argument aType.
Also, I'm not 100% sure what Expression means — is that a reflective construct, as I'm guessing? What's the introduction form for it?
I'm also wondering about the precedence of the various operators. If ▮ introduces Expression, how does ⦾Nullable 3▮ parse? However many parentheses I introduce, there's something missing.
In short, I don't have enough information to reconstruct the parse trees and typing derivations of the presented expressions without too much guesswork.

I agree on not posting implementations here.

▮ is the terminator to mark the end of a top-level construct

▮ is the terminator to mark the end of a top-level construct.

Nullable◅aType▻ is like Optional◅aType▻ in Java.

⦾Nullable 3▮ parses as ⦾(Nullable 3)▮

An Expression◅aType▻ is an expression which when executed returns an Actor of aType.

Next time, it would be much

Next time, it would be much better if you used the <code> tag to separate text from programming code/math, like so:

true, false, null

You can even use them inline, like x and y. No need for bold font and tiny black boxes.

Also, it would be much clearer if you used standard, non-unicode notation, like @Null and Expression<T>.

Finally, you can also make lists,

  1. both
  2. numbered


  • unnumbered
  • as
  • well.

Finally, I think it would be useful to explain concepts such as "Actor", to get everyone on the same page.

<code> tag doesn't preserve indentation :-(

The HTML code tag doesn't preserve indentation :-(

Lists are a good idea :-)

pre is good for code blocks

Use <pre> for blocks preserving whitespace indent (but avoid long lines that ruin Drupal formatting for LtU), and use <code> for inline code. Also remember to change each literal < you want visible to &lt;, and each literal & to &amp;. By convention we use <blockquote> for extended quotes, but that also works for other things you want offset in another style.

<code> doesn't preserve

<code> doesn't preserve indentation, but <pre><code> does.

<code><pre> is, "of course", invalid

Higher Order Functions

How can you have higher-order functions in ActorScript?

Procedure is an interface type in ActorScript

Procedure is an interface type in ActorScript.

Too terse

Thats too terse even for me :-)

Procedure is an interface type, e.g., Procedure◅[ ]↦Integer▻

Sorry. Procedure is an interface type, e.g., Procedure◅[ ]↦Integer▻ is an interface type extension of Procedure.

Ascii Actors

I have also been thinking about syntax with actors, as I don't get on well with the unicode, typing it is slow. I was thinking of something like:

interface Tree<t> {
    getHash[] -> Integer
}

actor Leaf<t>[terminal:t] extends Tree<t> {
    getHash[] -> Hash<t>.[terminal]
    getTerminal[] -> terminal
}

actor Fork<t>[left:Tree<t>, right:Tree<t>] extends Tree<t> {
    getHash[] -> Hash<t>.[left.getHash[], right.getHash[]]
    getLeft[]:Tree<t> -> left
    getRight[]:Tree<t> -> right
}

I think now I need to translate some code into actors and see how it goes.

Keyboard macros for ActorScript


The above are the keyboard macros for ActorScript, except for parameterized types, which use the obvious encodings for ◅ and ▻ in order to avoid ambiguity with greater-than and less-than.

Interface Tree<|t|> with getHash[] -> Integer

Actor Leaf<|t|>[terminal:t] 
  extends Tree<|t|> using
    getHash[] -> Hash<|t|>.[terminal]
    getTerminal[] -> terminal

Actor Fork<|t|>[left:Tree<|t|>, right:Tree<|t|>] 
  extends Tree<|t|> using
    getHash[] -> Hash<|t|>.[left.getHash[], right.getHash[]]
    getLeft[]:Tree<|t|> -> left
    getRight[]:Tree<|t|> -> right

Why so many characters?

What's wrong with using "<" and ">" for type parameters, like Java and C++ do... why "<|" and "|>"? Do these symbols have some common meaning? I am not aware of their use mathematically, so why not go for the simpler option?

Why capitalise keywords? It's an extra shift key to type, and it doesn't really add anything.

It looks like you are using whitespace (the indenting) to determine the end of an actor definition. I much prefer the '{', '}' method of marking blocks, and it is easily recognised from many languages (unless you are using curly brackets for another purpose?)

The inconsistency of conjunctions is also odd, why are some things "with" and others "using"? It reminds me a bit of Ada. The beginning curly bracket can stand in for all these different keywords.

In the end, I am not looking to get you to change the syntax you use for ActorScript, but if I were to play with writing an actor compiler I am thinking about the syntax I would want to use. So really just looking for you to confirm its sensible and would work, even if you like it less :-)

Expression languages are the future

Expression languages are the future; curly braces are for sets (as in mathematics).

Sets should be a library

I don't really think different collection types should have special syntax in a language. I think collections should be in libraries, and should use the interfaces provided for the user. User defined collections should be first class.

Sets, Maps, and Lists: standard in a modern programming language

Claim: Typed Sets, Maps, and Lists should be standard in a modern programming language.

standards are double-edged

(This is a "yes, but" reply where I agree, but add more constraints, then disagree without the new extras. In short, they should not be standard unless done in a particular way: so language users can undo some of the standard decisions and replace them when they cause conflict.)

General purpose data structures end up baking in a lot of decisions about edge cases and resource allocation. Slightly different flavors of implementation can fight against certain usage patterns, essentially becoming grossly inefficient in some cases. For example, suppose you cannot control where space is allocated by a collection; for some usage patterns, this prevents leveraging a golden opportunity because you know something about time and space patterns not expressed in the type system.

So what I like to see is collection types built atop a library of components arranged in a bottom-up design, where you can mix and match policy and mechanisms in different ways. But this forces you to make a lot of decisions, when the api requires each nuance to be explicitly invoked. So another layer atop that can bake in a set of standard decisions, minimizing details needing explicit concern. This no-sharp-edges interface is probably the one most folks want, until a dev finds abysmal performance in a use case that's fixable by altering one of the standard decisions.

A language is simpler if you can't make decisions: good for newbies, but bad for fluent devs who want control over particulars because it improves performance by a constant factor (like two, or five hundred).

It's a tradeoff like most others. Code is easier to understand when nuance is impossible, so one-size-fits-all, preventing both quite bad and quite good practice. But that just forces a dev to go around a language to get better results, making code harder to understand than it would have been if it fit into what a language supports.

Interfaces for mathematical Sets, Maps, and Lists standardized

Interfaces for type parameterized mathematical Sets, Maps, and Lists are quite standardized.

In addition, extensions and alternatives in libraries can be provided.

sorry in advance (re interfaces for math sets)

Your approach to types seems intentionally generic.

(Get it? haha.)

Not generic

Actually it's a very special case, not generic. There is a standardised interface that can operate over all collection types including sets, maps and lists, and that's iterators. See Stepanov's "Elements of Programming".

Standard in a library.

Yes they should be provided as standard, by the standard library, not hard coded into the language core. Dealing with such things should not be in the compiler.

This design of including containers in the language is typical of scripting languages where it is needed for performance.

I think the core language should provide the tools necessary to define any collection efficiently, not just maps, sets and lists, and should be done generically.

Are the lists bidirectional? Are the lists contiguous? Are the maps ordered? Are the sets ordered? There are actually many varieties of each of these containers, and you don't want to build all that into the core language, which should be as small as possible.

re built-in sets, lists, and maps

Typed sets, lists, and maps are basic building blocks of formal mathematics. (Unfortunately, these foundational concepts are often not taught well in undergraduate math courses.)

Sets, lists and maps abstract inessential details of computational representation precisely because they describe the mathematical structure many different representations have in common.

One style of program that merits special attention writes against generic interfaces for sets, lists, and maps. Of course, more than one implementation of those interfaces can be available for use.

That style of program merits special attention because:

(a) It tends towards mathematical clarity by hiding inessential implementation details.

(b) It tends towards mathematical lucidity by using world-wide familiar mathematical terminology and concepts.

(c) The span of programs which can be performantly written in this style is very large and tends to grow over time.

Lastly, a case can be made that sets, lists, and maps deserve a distinctive notation, much as algebraic relations and functions get a distinctive notation.

Sets, Maps and Lists

I would argue that only the 'Array' deserves special attention, as all the others can be implemented in terms of it.

I also think that having so much in the 'core', for example depending on set theory, rather than defining it, does not lead to any kind of clarity, as you cannot see the definition of a Set, Map or List in the language.

Having these built-in is only necessary in interpreted scripting languages, due to the inefficiency of the interpreter. With a compiled language there are no performance advantages to having these built-in, and they have to be defined somewhere.

It is better that the definitions of Set, Map and List are visible, in the source language, in a library, than hidden inside the compiler. You want the compiler itself to implement only the necessary primitives.

Another problem is the representation of these things. Sets, for example, may be represented optimally for one task but not for another. Take disjoint-sets and the union-find algorithm: implementation details (whether to use path compression and weighted union) can make such huge differences in efficiency that a smart phone using a good algorithm can outperform the world's fastest supercomputer using a bad one. The optimal data structure for representing the set depends on the use case, so one representation can be excellent for one application but unusably slow for another.
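To make the representation point concrete, here is a tiny persistent union-find sketch (naive linking, no path compression); the optimizations discussed above would change these functions, not the Set interface they sit behind:

```haskell
import qualified Data.IntMap.Strict as IM

-- Each element maps to its parent; a root is its own parent.
type UF = IM.IntMap Int

-- Follow parent links up to the root of x's set.
find :: UF -> Int -> Int
find uf x = case IM.lookup x uf of
  Just p | p /= x -> find uf p
  _               -> x

-- Link the root of x's set under the root of y's set.
union :: Int -> Int -> UF -> UF
union x y uf = IM.insert (find uf x) (find uf y) uf

-- Start with every element in its own singleton set.
singletons :: [Int] -> UF
singletons xs = IM.fromList [(x, x) | x <- xs]
```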

Building these things in might be great for mathematics, but it's not good for practical computation.

re sets, maps, and lists

It is better that the definitions of Set, Map and List are visible, in the source language, in a library, than hidden inside the compiler.

This is an example of a comment that is hard to reply to because it is such a strange non sequitur.

The distinction of what is library, and what is compiler, is something you made up and introduced here to advance an argument you are making in opposition to a position nobody has taken.

Minimal core language.

Then a reply saying that implementing these in a library, in an implementation of ActorScript, is acceptable would be good. Further, a discussion about what is necessary in a cut-down 'core' language, such that these other things can be implemented as a library, would be great. Finally, some practical way of defining new operators, and maybe some custom syntax (although the parsing difficulties need to be carefully considered), would enable these operators to be defined in the library and removed from the core.

Advantages of standardization of Mathematics in a PL

There are huge advantages to standardizing Mathematics in a programming language.

Advantages for mathematics

Yes, maths libraries are a great idea. Matrices, linear maths, neural network libraries, etc. The key thing is to work out the minimal core language necessary to support them.

Core mathematics should be standardized: Maps, Sets, Integers

Core Mathematics should be standardized, e.g., Maps, Sets, Integers, Lists.

Unicity of notation using Unicode

Unicode can be used to achieve unicity of notation to aid understandability.

Also, it enables an IDE to provide better completion, suggestions, and error correction.


I find the addition of so many new operators results in code that looks cryptic and is hard to read. I expect I could get used to a lot of the syntax after a while, but my concern is that this makes the language irregular and non-generic. Lambda calculus is beautiful because it only has a single operator (lambda) yet can express the whole of (sequential) computation. I find myself thinking that there is something wrong if the language needs so many operators and special cases in its core.

Useful idioms can make programs more understandable

Useful idioms can make programs more understandable by those who know the idioms.

Competition is the solution

Providing an alternative, and seeing which is more popular would seem to be the solution here.

Core idioms need to be standardized.

Core idioms need to be standardized.

Typing unicode

If you are working on OSX there is a character viewer that you can enable in System Settings (Keyboard) that is activated from an icon in the toolbar. It will give you a palette of unicode characters, but the window does not steal focus. Double clicking will insert the character into your current window.

Under linux you can get a similar effect if your editor / terminal is UTF-8 using the cut-buffer: build a palette of useful characters on a page and the select / middle-click to paste them into another window.

Or, you know, stick to ascii :) but maybe this is helpful.

entering unicode for DL and Actorscript (GNU Emacs)

Quick and dirty:

  (setq codes
        '(("<>" ?\u25c1 ?\u25b7)
          ("<|" ?\u25c1)
          ("|>" ?\u25b7)
          ("dot" ?\u2219)
          ("all" ?\u2200)
          ("exists" ?\u2203)

          ;; .... add your own additional entries here ...

          ))

  (setq code-names (mapcar 'car codes))

(defun insert-code (c)
  (let ((def (assoc c codes)))
    (if (not def) (error "unrecognized code: %s" c))
    (setq def (cdr def))
    (while def
      (cond
       ((eq t (car def)) (insert-code (completing-read "code: " code-names nil t)))
       (t                (insert (car def))))
      (setq def (cdr def)))))

(defun ide (s)
  (interactive (list (completing-read "code: " code-names nil t)))
  (insert-code s))

(global-set-key "\M-." 'ide)   ; or pick your own keybinding


  code: exists RET

inserts "∃" at the point.

IDE keyboard macros can instantly convert ASCII to Unicode :-)

IDE keyboard macros can instantly convert ASCII to Unicode :-)


I use Vim not emacs. Also I don't like being dependent on a particular editor or mode. Sometimes I write code on my mobile phone in an email or directly into a website.

re editor wars.

I use Vim not emacs. Also I don't like being dependent on a particular editor or mode. Sometimes I write code on my mobile phone in an email or directly into a website.

What kind of coffee do you like?

Yellow Bourbon Natural

I like the Yellow Bourbon heirloom varietal, processed using the natural method (as opposed to washed). I think a lot of people underestimate the effect of the process on the coffee's flavour, when it is in fact one of the largest influences.

re yellow bourbon natural

How interesting.

More Interesting

It's more interesting than completely pointless comments about how interesting you find things. My original comment was making a point about not wanting to be limited to special editors to enter code. A far more sensible reply would have been to point me to page 405 in the book, where it gives ASCII equivalents for all the Unicode characters.

re more interesting

I am trying to hint at the virtues of concision and focus.


Well I would say that introducing so many new things at once makes focus impossible.

If the post about 'nullable' had taken an existing language that was widely accepted (say Haskell or Java), and extended it with just the nullable concept, then we could have a focused discussion.

But introducing nullable, within the framework of something so many steps away does not really encourage a focused discussion.

Besides, these are human beings communicating, so some degree of conversation is allowed. Conversations follow their own path, and being dogmatic about what can be discussed is only important if you are trying to control propaganda, rather than engaging in the conversation wherever it leads.

re focus

I am assuming that you are conversing in order to better understand Actors and, of course, to contribute ideas.

My admittedly vague suggestion about concision and focus is aimed at helping you better achieve that goal.

You are right

Yes, indeed, I wish to better understand Actors, and I also want to understand the rationale behind some of the design decisions made, where those decisions seem different from those that I would make.

Haskell Maybe using Actors

Thinking about the extensible discrimination in Haskell above, and getting back to Nullable, we have:

data Maybe t = Just t | Nothing

We could translate that as something like:

actor Nothing[];

actor Just<t>[just:t] {
    just[] -> just
}

discrimination Maybe<t> {Just<t>, Nothing}

Which seems remarkably similar to:

interface Maybe<t>;

actor Nothing<t> extends Maybe<t>;

actor Just<t>[just:t] extends Maybe<t> {
    just[] -> just
}

Except "actor Nothing extends Maybe<t>" would seem to be not allowed. Why is that? As you can see from the Haskell code, the interface can be seen as just an extensible discrimination, so why do they behave differently? Are both necessary, or could one be dropped in favour of the other?
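The claimed equivalence can be sketched in Haskell by encoding Maybe as an "interface" consisting of its case handlers (an illustrative Church-style encoding, not ActorScript):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A Maybe' value is defined entirely by how it responds to the two
-- handlers, so the closed discrimination becomes an open interface.
newtype Maybe' t = Maybe' { caseMaybe :: forall r. r -> (t -> r) -> r }

nothing' :: Maybe' t
nothing' = Maybe' (\onNothing _ -> onNothing)

just' :: t -> Maybe' t
just' x = Maybe' (\_ onJust -> onJust x)
```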

re Actor simulation of Haskell Maybe

See printed pages 31-32 (pdf 32-33) in ActorScript™ extension of....

The key aspects of Nullable◁aType▷ are:

1. Static elimination of "null address" bugs (forcing every possible null to be handled).

2. A 1:1 correspondence of nullable types to corresponding null values (i.e., no universal "Nothing" value).

Maybe Is Okay

I don't see the need for a built-in. A discrimination automatically makes you handle all options, so a normal discrimination covers (1). The interface extension also prevents a stand-alone 'null': you have to qualify the null with the type of nullable you are dealing with, covering (2).

I don't see the need for a built in.

See printed pages 31-32 (pdf 32-33) in ActorScript™ extension of....

No Reason?

That gives the syntax for "Nullable", but does not explain why you would want a built-in instead of just defining a discrimination.

It also does not explain the semantic difference between a discrimination and extending an interface. As far as I can see you only need one of these. As discriminations are closed, extending an interface seems to be the more general mechanism, so you could just get rid of discriminations with no loss of expressive power, and simplify the language, which is always a good thing.

Nullables should be standardized and concise

Nullables should be standardized and concise to encourage their use everywhere.

Maybe not Enough

I agree; I think a great language would default everything to NotNullable and require Nullable annotations otherwise. Haxe, for example, gets this wrong.

Define in a library.

It seems to me the second definition I provided, where Maybe<t> is an interface extended by the Just<t> and Nothing<t> actors, provides all the desired properties: you have to cover both cases, and the 1:1 property holds. If the definition is in a standard library, then it behaves just like the proposed built-in Nullable. This is how I would choose to implement it.

re No reason?

Scheme doesn't need anything other than LAMBDA, either.

1:1 of nullable types to corresponding null values; no "Nothing"

Excellent summary, Thomas!

Mutable Actors

How would I write an actor which is a mutable single-item container:

Actor Leaf<t>[terminal:t] extends Tree<t> using
    setTerminal[u:t]:Void -> terminal := u 
    getTerminal[]:t -> terminal

I realise this is probably wrong but it shows what I am intending.

Also how do you override an operator in ActorScript? for example if I wanted to have uses of ':=' on the actor to call setTerminal or getTerminal.

Mutable leaves

What do you think of the following proposal for getting and setting:

Actor Leaf◅aType▻[aTerminal:aType]
   current := aTerminal
   extends Tree◅aType▻ using
       ⟦terminal⟧:aType → current,
       ⟦terminal ≔ nextTerminal:aType⟧:Leaf◅aType▻ →
           ⍠Leaf◅aType▻     // return this actor as Leaf◅aType▻ so that updates can be chained
             afterward current ≔ nextTerminal▮

Edit: Changed above to use ⍠Leaf◅aType▻ for clarity
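A rough Rust sketch of the same idea (hypothetical names, not ActorScript): a mutable single-item container whose setter returns the container itself, so that updates can be chained in the way the ⍠Leaf◅aType▻ return above is meant to allow.

```rust
// A mutable single-item container, analogous to the Leaf proposal.
struct Leaf<T> {
    current: T,
}

impl<T> Leaf<T> {
    fn new(terminal: T) -> Self {
        Leaf { current: terminal }
    }

    // Getter, corresponding to the formatted message ⟦terminal⟧.
    fn terminal(&self) -> &T {
        &self.current
    }

    // Setter, corresponding to ⟦terminal ≔ next⟧. Returning `&mut Self`
    // plays the role of "return this actor so updates can be chained".
    fn set_terminal(&mut self, next: T) -> &mut Self {
        self.current = next;
        self
    }
}

fn main() {
    let mut leaf = Leaf::new(1);
    leaf.set_terminal(2).set_terminal(3); // chained updates
    println!("{}", leaf.terminal()); // 3
}
```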

Passing Mutated Leaves

It looks okay, although why do we have to use an Array? ⍠⍠ doesn't seem as clear as a keyword like 'this'. Why do you have "⟦terminal ≔ nextTerminal:aType⟧" and "afterward current ≔ nextTerminal"?

When I pass an actor in a message, I assume I am passing the address of the actor, not a copy of the actor.

Formatted messages delimited by ⟦ and ⟧

⟦terminal⟧ is a formatted message for getTerminal[ ] and ⟦terminal ≔ 3⟧ is a formatted message for setTerminal[3].


The symbol section of the book says [| |] is for an array (p. 406)

Array references are a special case of formatted messages

Array references using ⟦ and ⟧ are a special case of formatted messages.

"this" reserved word doesn't always work

The this reserved word doesn't always work:
* this is an ambiguous identifier in nested Actors, which can cause confusion
* this⍠Leaf◅aType▻ would still be needed to indicate which interface of this Actor is being used, instead of just ⍠Leaf◅aType▻.

Nested Actors

You used ⍠⍠ above; how does that relate to "⍠anInterface"?

I think introducing new operators makes it harder to learn the language. I also think that operators should have a predictable set of axioms (for example, '+' should be associative and commutative), and any overloading of its meaning needs to preserve these properties.

A common way to return the 'outer' actor would be to assign 'this' to a differently named variable in the outer actor, so that both 'this' and the new variable are in scope in the inner actor and you can return whichever one you want. I don't think it is really worth extra syntax just to make this one assignment shorter.

Changed ⍠⍠ to ⍠Leaf◅aType▻ for clarity

Good point!

I changed ⍠⍠ to ⍠Leaf◅aType▻ for clarity.

Literal Actors

If "1" is an actor, then when I pass it in a message to another actor, it passes the address of the actor. In which case all addresses must already be in use, giving a unique address to each of the infinitely many natural numbers.

If we only instantiate the actor when a value (say "1") is used, then what do you put in the constructor message, as that would have to be the address of another actor?

An address for "1" can just be "1"

An address for "1" can just be "1". But a message cannot be sent to an address unless its type is known.

For clarity, expressions like Integer[3] and NonNegative[x] can be used for more specific typing.

Address Clashes

But what if some other actor happens to be assigned address "1" in memory? How do you distinguish between the two "1"s? Are you suggesting that each actor 'type' has its own address space?

I am not sure what "Integer[3]" means? You already have ':' for typing, so what would be wrong with (3 : Integer) ?

In other languages, a constant like "3" is polymorphic in that it provides a "Numeric" interface, but we don't know if it is Int/Float/Real etc. I don't think this is possible in ActorScript because we cannot have ungrounded type variables. This is a shame: what would be lost if the universal quantifier were added to the type system, allowing expressions to be typed like "(3 + 4 : forall t . (Numeric t) => t)"?
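The forall-quantified typing being asked about corresponds roughly to a trait-bounded generic function in Rust (a sketch of the general idea, not a claim about ActorScript):

```rust
use std::ops::Add;

// Roughly `forall t. (Numeric t) => t -> t`: the function is defined
// for every type with the required operations, and each call site
// picks the concrete type.
fn twice<T: Add<Output = T> + Copy>(x: T) -> T {
    x + x
}

fn main() {
    // The literal argument is resolved to whichever numeric type the
    // call site demands, much as Haskell resolves `3` per context.
    let a: i32 = twice(3);
    let b: f64 = twice(3.0);
    println!("{} {}", a, b);
}
```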

Address clashes are impossible in Actor systems

Address clashes are impossible in Actor systems. It is important not to confuse an Actor address with a machine address.

units might help disambiguate?

Maybe you can type numbers with units to disambiguate what is meant by address. (This is a reference to several conversations on LtU about associating units with numbers, so more rational things happen in typing of arithmetic operations. Typical examples of units are things like seconds, centimeters, inches, dollars, etc.)

Most devs I know would assume address means machine address (a synonym for pointer). But your usage also makes perfect sense (arbitrary name used to address something whether number or otherwise), but is slightly more abstract. I find it hard to get people to bind to abstract definitions first when they know a concrete one.

If your language had some units syntax, it would also help in conversation. Just a random idea. (Edit: so what are the units for actor address? Actor[1]?)
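One common encoding of this units idea, sketched in Rust: each unit becomes a distinct newtype wrapper, so Seconds and Centimeters cannot be confused at compile time.

```rust
// Units as zero-cost newtype wrappers: Seconds[4] and Centimeters[3]
// become values of distinct types, so mixing them is a type error.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Seconds(u32);

#[derive(Debug, Clone, Copy, PartialEq)]
struct Centimeters(u32);

fn wait_for(s: Seconds) -> u32 {
    s.0
}

fn main() {
    let t = Seconds(4);
    let d = Centimeters(3);
    println!("{}", wait_for(t));
    // wait_for(d); // rejected: expected `Seconds`, found `Centimeters`
    println!("{:?}", d);
}
```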

Seconds[4], Centimeters[3]. etc. are perfectly fine

Seconds[4], Centimeters[3]. etc. are perfectly fine.

NonNegative[3] is of NonNegative type

NonNegative[3] is of NonNegative type.

Good suggestion: use 3:NonNegative to mean NonNegative[3]

Edited for clarity

Overly Restrictive?

If an actor has no state, is the restriction that it only processes one message at a time overly restrictive? It seems to me stateless actors could have many messages being processed without affecting each other, provided each message is processed using a different stack (effectively allowing multiple threads to be in the actor at one time).

Further, if there is no state, and therefore no restriction on the parallelism of functions in the same actor, there is no need to group them into actors at all, and free functions express more parallelism (when there is no state) than actors do.

So if we stick to the pure subset of a language like Haskell, there is more implicit parallelism than ActorScript.

An Actor can process many messages at a time

An Actor can process many messages at a time in two different ways:
1. The Actor Factorial can process arbitrarily many messages at a time
2. A readers/writers scheduler can be processing both read and write messages at the same time.
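The first way, many messages in flight through a stateless Actor, can be illustrated outside ActorScript with a pure function invoked from several threads at once (a Rust sketch; Factorial here is just an ordinary function):

```rust
use std::thread;

// A stateless "actor": a pure function with no shared mutable state
// can safely process arbitrarily many messages at the same time,
// here one message per thread.
fn factorial(n: u64) -> u64 {
    (1..=n).product()
}

fn main() {
    // Send four "messages" concurrently.
    let handles: Vec<_> = (1..=4)
        .map(|n| thread::spawn(move || factorial(n)))
        .collect();

    // Collect the four replies, in the order the messages were sent.
    let results: Vec<u64> = handles
        .into_iter()
        .map(|h| h.join().unwrap())
        .collect();
    println!("{:?}", results); // [1, 2, 6, 24]
}
```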

Has this changed?

I thought only one message could be processed by an actor at a time. How does this work with stateful actors where one message might result in an inconsistent state if another message is processed at the same time? As I said, if there is no state, what is the purpose of grouping a bunch of functions (message receivers and responses) into an Actor?


Actor Fork<t>[left:Tree<t>, right:Tree<t>] extends Tree<t> using
    getHash[]:Integer -> Hash<t>.[left.getHash[], right.getHash[]]
    getLeft[]:Tree<t> -> left
    getRight[]:Tree<t> -> right

There is no need to fix the set of functions associated with the Fork type. You could equally well do something like:

actor Fork<t>[left:Tree<t>, right:Tree<t>] extends Tree<t>

getHash[Fork<t> f]:Integer -> Hash<t>.[f.left[].getHash[], f.right[].getHash[]]

getLeft[Fork<t> f]:Tree<t> -> f.left[]

getRight[Fork<t> f]:Tree<t> -> f.right[]
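The free-function style proposed above can be sketched in Rust (simplified to integer leaves, with a made-up hash combiner): the operations live outside the type definition, so new ones can be added without touching Fork itself.

```rust
// The data definition fixes only the fields, not the operations.
struct Fork {
    left: u64,
    right: u64,
}

// Free functions over Fork: any module can add more of these later,
// without modifying the Fork definition.
// (The hash combiner below is an arbitrary illustration, not the
// Hash<t> from the thread.)
fn get_hash(f: &Fork) -> u64 {
    f.left.wrapping_mul(31).wrapping_add(f.right)
}

fn get_left(f: &Fork) -> u64 {
    f.left
}

fn get_right(f: &Fork) -> u64 {
    f.right
}

fn main() {
    let f = Fork { left: 2, right: 5 };
    println!("{}", get_hash(&f));  // 2*31 + 5 = 67
    println!("{}", get_left(&f));  // 2
    println!("{}", get_right(&f)); // 5
}
```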