The Avail programming language

The Avail programming language was released yesterday, but it hasn't received the attention it deserves.

Avail is a multi-paradigmatic general purpose programming language whose feature set emphasizes support for articulate programming.
I believe the authors are being a bit modest!

The most interesting thing about Avail is its programmable (Turing-complete) type system that enables strongly typed multiple method dispatch. To me, Avail appears to be a happy marriage between Smalltalk, Maude, and (dare I say) Coq?

I'm not totally sold on the persuasive use of Unicode, though - yet.

What's new?

I saw this last night, and all I concluded after looking at it for a bit is that I hate this documentation format. Does someone have enough of an understanding to compare and contrast this to the usual suspects? Does it have parametric polymorphism (see the example in the red box at the bottom of the linked page)?

Only two mentions of

There are only two mentions of polymorphism on the whole site, so they're clearly using non-standard terminology. I believe it does support parametric polymorphism, however, as described on the type parameterization page:

A type that depends upon one or more other values is incomplete. Given an incomplete type A that depends upon a value B, it is said that B is a type parameter of A or, alternatively, that A is parametric on B. A type is considered complete if it either 1) does not have any type parameters or 2) has an actual value bound to each of its type parameters.

Consider the type set. In Avail, this type is incomplete. It has a type parameter, the element type, that restricts the membership of each instance to values of a particular type. The type set of {…}ᵀ is the most general completion, imposing no membership restrictions on its instances. The type set of {…, −1, 0, 1, …}ᵀ, on the other hand, has as instances only those sets whose members are exclusively integers.

I don't think so

I don't think it directly supports it, as you're not allowed to have a value of an incomplete type, and there are no type variables that I can see. So I believe you can have a type family List[_], and have values of type List[Int], but not functions of type a -> List[a]. You can get something similar with Semantic Restrictions, but it's verbose, and nothing is checked until run time, when all of the types are monomorphic.

"incomplete type"

The "incomplete type" concept is for discussing Avail; it's technically not a property of the type system itself, as only complete types occur. Perhaps "type family" would be more appropriate, considering the confusion.

As an example, the type

As an example, the type produced by the expression "tuple" (without quotes) is the same type produced by the expression "<any…|>" or "<any…|0..∞>", or even "<any…|0..42> ∪ <any…|43..∞>". We talk about tuple being an incomplete type (or type family), even though it's an actual type that lives at the top of a lattice of tuple types.
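
These identities can even be checked directly (a sketch - I'm assuming the standard library's "Assert:_" statement is spelled the way I remember):

Assert: tuple = <any…|0..∞>;
Assert: tuple = <any…|0..42> ∪ <any…|43..∞>;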

The same doesn't quite hold for variable types, however. A variable is parameterized by its content type, the kinds of things it can hold. Because we can read from variables, the variable type covaries with its content type. Because we can write to them, the variable type also has to contravary with its content type. So the type graph is actually just an infinitude of incomparable variable types. But we still want to talk about variable types in general, so we say "variable" is an incomplete type.

As it turns out, that's a bit of a fib for variables, since we decided to split the read and write capabilities into two orthogonal parameterizations, forming a lovely diamond lattice... where pretty much only the middle horizontal line of mutually disjoint types occurs in practice! But as a consequence we get a most-general variable type, from which you can read ⊤ and write ⊥. So technically that variable type is more general than everything in the variable type family, and therefore is a suitable representative.

Semantic restriction, take 2

After one day of studying the language, this is what I think the idea is.

Semantic restrictions are user-defined methods that are called at compile time to strengthen (or specialise) the types of a generic method at the call site.

Here is a generic method that enlists (or rather entuples) a generic element, whose result is further restricted to the 'instantiation' type, as in the following snippet:

Method "enlist_" is [ x : any | <x> ];
Semantic restriction "enlist_" is [ x : any's type | <x…|1>] : type;

a : <integer…|1..>;         /* type: a tuple of integers with at least 1 element */
a := enlist 1;              /* returns <1> */
a := enlist 1 ++ enlist 2;  /* returns <1, 2> */
a := enlist "a";            /* fails to compile */

I'm not sure whether you're

I'm not sure whether you're referring to the use of /* */ comments, the descriptive narrative, or even the color of the code bubble. Please clarify.

And yes, it has very deep parametric polymorphism. As just a quick example, the type <[0..10], {string…|2..2}, float…|2..5> is the type of tuples that have between two and five elements, where the first element is an integer between zero and ten inclusive, the second element is a set containing exactly two strings, and the third through fifth elements, should the tuple have that many elements, are all floats. The tuple <7, {"foo", "bar"}, 1.0, 2.0> is a member of that type. That type can be used as a parameter type in a method, which vouches at compile time for the operations performed on the elements of the tuple at runtime. If the parameter name is t, the expression |t[2]| + |t| is statically permitted in the method, and has a static type of [4..7].
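
Here's a minimal sketch of such a method, in the style of the other snippets in this thread (the method name "summarize_" is just for illustration):

Method "summarize_" is
[
    t : <[0..10], {string…|2..2}, float…|2..5>
|
    /* t[2] is a set of exactly two strings, so |t[2]| is always 2. */
    /* |t| is between 2 and 5, so the sum has static type [4..7].   */
    |t[2]| + |t|
];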

The type system is described quite thoroughly on the site, and well worth a look if this piques your interest.

Thanks

That's an impressive example, but I don't consider it parametric polymorphism -- where's the parameter? Consider a 'sort' function. Is there an easy way (preferably inference!) to give this the type "forall a. List(a) -> List(a)" with 'a' being the type parameter? It doesn't have to take that exact form, but it should let you call sort with List(Int) and have the system infer that the result is List(Int). Or can you only type it "List(Any) -> List(Any)" and then impose a semantic restriction to recover something equivalent to parametric polymorphism?

The type parameters are a

The type parameters are a little bit disguised in the syntax. Or more correctly, they're presented with more pleasant, specific mechanisms than what you may be used to.

<[0..10], {string…|2..2}, float…|2..5>

versus

Tuple[IntRange[2,5],IntRange[0,10] * Set[IntRange[2,2], String] * Float]

I really can't think of a more concise way of expressing the almost-equivalent of the Avail type using the functional notations that I have any familiarity with.

But you're right about the functions not being parametrically typed. I actually rejected that approach almost right away, in deference to semantic restrictions. Most functions don't really need it, but when one does need it in the presence of deeply parameterized types like Avail's, one generally can't just pass the parameter type straight through. It works for some things, like the function application operation "_(«_‡,»)", parentheses "(_)", tuple construction "<«_‡,»>", etc., but not for things like tuple subscripting "_[_]", where knowledge about the index's range influences the resulting type. That just can't be done with traditional parametric polymorphism. Those cases not only dominate the type narrowing that happens in Avail, but I couldn't figure out any other way to solve them without resorting to really weak types. Suggestions?

The one other aspect of semantic restrictions is to ensure the types of the inputs are mutually appropriate. Parametric polymorphism had the same kinds and degree of usage holes here as it did for deriving the output types. For example, tuple subscripting "_[_]" takes a tuple and an index, but the semantic restriction checks statically that the possible values for the index are in bounds for the tuple type (and then produces the type union of the possible tuple element types). In practice, having the programmer ensure the tuple indices are always statically in bounds at every call site was far too onerous, so we weakened the requirement: if any of the possible indices would be in range, we allow the call site -- but then we check the bounds at runtime. That's a general policy of the Avail library: any time we discover that satisfying the static precondition of some operation is systematically too difficult, we weaken it to fail only the cases that would always fail.

So you have parameterized

So you have parameterized types but not quantified types. Just FYI, I think most people and the academic literature understand 'parametric polymorphism' to mean the latter (see the Wikipedia article for example).

The kinds of usage holes you describe for parametric polymorphism are commonly addressed under the name of dependent types. For example, you might give a type such as "vector-concat : Vec(n), Vec(m) -> Vec(m+n)".

I, like many people around here, have my own work-in-progress language design, and it explores a similar region of the design space (customizable types), so I have some strong opinions. (I haven't yet published anything about it in a form other than forum comments, though.) I think parametric polymorphism is pretty important because it allows you to work in an abstract setting (let T be a type, let f : T -> T, etc.) and then package up the work you've done into a function or module with minimal hassle.

Similarly, for the kinds of precise types you're talking about, I think dependent types are important. How often are you going to want to hardcode a type like the one you gave above? It seems to me that usually you'll be working in a setting where you have some variables around (the number of objects in the world, etc.) and you'll want to reference those variables in your types. So rather than <[0..10], ...> you'll want <[0..n], ...> where n is a variable. It sounds like you've written a good bit of code in this language. Have you not noticed that desire?

But you're right that the problems of type membership that you might like to solve quickly become very difficult (or formally undecidable) when you have full dependent types, or even the kind of parameterized types that you have. The way I look at it, the type system serves more than one role. One of them is to infer a coarse shape for variables (Integer vs. String) so that we can do things like overload resolution or type-driven meta-programming. This can be done efficiently and mostly automatically -- check out the Hindley–Milner type system if you aren't already familiar with it, or one of the languages that uses something like it (e.g. ML, Haskell, ...).

Then we have types that embody more precise assertions about our code. These are useful for lightweight theorem proving, and it's nice to be able to make them very precise, but it's important that we not be required to actually prove that the precise types are used correctly. So I take a similar position to yours with semantic restrictions in this regard - theorem proving is hard and programmers are busy enough. I would like the language to support such proof, though, when desired.

Thank you for the

Thank you for the terminology clarification. I've noticed the desire for a mechanism for parameterizing methods, but it's directly inconsistent with one of the simplest requirements of my language: That types, even function types, always be comparable. I don't believe it's possible with parameterized types, since I think it requires second-order logic (or solving the Halting Problem, depending on the power of the type system) just to decide if one function type is a specialization of another. Note that this is much harder than determining if a particular function satisfies a particular statically expressed parametric type (which is usually very quick, but takes exponential time in the exceedingly unlikely worst case, if I recall).

Functions have to be first-class objects in Avail, and as such they have to have types that can be compared with other types. So the manipulation and checking of what would in other languages be parametric types must instead be dealt with via Avail's semantic restrictions.

An alternative, imperative view of semantic restrictions is that of a simple cast mechanism. Avail's type system can't prove that a method "foo_" with body [x : integer | x - x] always produces zero, but a semantic restriction can at least state it, and the rest of the program can then rely on it. Then the subexpression "10 ÷ foo 5" could fail to parse because of a mandatory division-by-zero error. Possible divisions by zero are permitted, but there's no way this division can succeed. Again, the semantic restriction on "_÷_" determines this policy. If you want different rules, just define a different "_÷_" method and don't import the library one. Embedding this policy directly in a programmer-inaccessible type system would be doomed to failure because of the sheer number of policy decisions.
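
Spelled out in the style of the "enlist_" snippet above (a sketch - the exact library spellings, such as "0's type", may differ):

Method "foo_" is [x : integer | x - x];

/* The type system can't prove the result is always zero, but we can state it: */
Semantic restriction "foo_" is [xType : integer's type | 0's type];

b ::= 10 ÷ foo 5;   /* now rejected at compile time: a certain division by zero */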

I have read in the past about the Hindley-Milner type system, and the Milner-Mycroft type inferencing algorithm, but I think semantic restrictions actually provide the missing piece of the puzzle. Arbitrary code is provided to perform the required semi-unification, so there are more cases that can fall to its might than to a type system which has a pre-coded algorithm for resolution.

Perhaps there's a way to do both? What do you think?

What do you mean and why?

[...] one of the simplest requirements of my language: That types, even function types, always be comparable.

What do you mean that types must be comparable? Do you mean that given two types you must be able to decide if one is a super-type of the other? If so, why is that a requirement? Is it to obtain that 'first order type safety' property you mentioned? If so, I don't follow the logic here. If you limit yourself to unquantified types, typing becomes easier to solve, but if you want to express quantification over types or values (which in my experience you very frequently do!) you have to do so as semantic restrictions, which aren't checked. If you add quantified types, the general case of typing gets harder, but only in the cases you couldn't express before. Furthermore, there are plenty of useful quantified cases (like sort) that you will be able to express without resorting to writing semantic restrictions manually; in fact, they can often be inferred without writing any code, and they will actually be verified by the type system, unlike semantic restrictions.

With regard to your example, is "this code always fails" really a useful thing to report? This is catching the kind of bug that testing will catch 90% of the time. Isn't the more useful thing to report either "here's an example case where this type/assertion fails" or "couldn't find an example of this code failing, but couldn't prove that it succeeds, either"?

Unlike functional languages,

Unlike functional languages, Avail has objects and methods (multi-methods, actually). To dispatch a method, one has to decide which matching method implementation is most specific, i.e., has the most specific type signature. Quantified types should be able to decidably select which methods are applicable, but not which method is most specific. This requirement is the core reason why functional languages cannot have an extensible object system. I'm sure there has been some very clever work done on this subject in the last few decades (I haven't been paying much attention), but I would be very surprised if someone had found a promising way.

Semantic restrictions are themselves written as Avail code, so they have *some* internal safety. Avail has reflective types as well (they're objects), and those metatypes have a covariance relation to the types that they're metatypes of (a relation I call metacovariance), so the broad strokes are hard to get wrong in a semantic restriction. The details can often be tricky, however, because one is usually dealing with far more precision than quantified types provide: writing semantic restrictions is the equivalent of creating a type *system*, not a collection of types. If they're correct (equivalent to a type system being correct), then the resulting program is free of runtime type errors (ditto). Perhaps there's a way of specifying the equivalent of quantified types in a more direct way for methods that need them?

Consider function application: "_(«_‡,»)". The semantic restriction ensures the first argument (the function) is itself a function that statically accepts the tuple of (comma-separated) arguments captured in the second argument. Similarly the generalized currying partial-application method "_(««_»«`_»?‡,»)" does the same thing for the supplied arguments and place-holder underscores, but produces a new function with the exact correct type. E.g.,

f ::= [a : integer, b : byte, c : double | a + b + c];
g ::= f(_,255,_);
h : [integer, double]→double := g;
Print: g(3000,3.6);
/* produces 3258.6 */

In this example, g is statically known to be a function of type [integer, double]→double. Avail has no built-in support for currying, and yet it is able to define it safely within the language. BUT - because we're defining something so powerful, the type safety is for its use, not its implementation. Similar to a functional language having a type system at least partially implemented in C.

As for sorting, there are plenty of ways to address it. It's really no harder than the function application. The source is available for download, or if you don't want to install it you can look up the documentation for almost every method in the library through the stacks interface. I just looked for the word "sort" and it shows various methods and restrictions, but the semantic restriction doesn't yet mention the requirement that the function must (statically) accept any two elements of the tuple. Oops. It is a dev release, after all :-). And perhaps you can find a way to express the restriction more generally. Remember, this is Avail, so you don't have to directly invoke "Semantic restriction_is_" to have that effect.

So... the specific example of detecting division by zero could catch certain kinds of unlikely bugs, but really it was intended as an accessible example. Division by zero is often used in this kind of example because it's related to a tidy piece of math (taught early) that nobody sensibly refutes. A better fit for what you're looking for might be tuple subscripting, detecting when someone is definitely accessing an element that's beyond the range of a tuple. Again, you can use the stacks documentation link to find "_[_]", then in the Semantic restrictions area look at the entry for tuple meta, natural number's type.

The Avail team's practical experience is that the type system is sufficiently strong that after battling (negotiating with? having an enlightening discussion with?) the compiler and type system, the resulting code not only is almost always correct, but has flushed out and corrected many initial design limitations ahead of time. Give it a try and see what you think.

Dispatch

To dispatch a method, one has to decide which matching method implementation is most specific, i.e., has the most specific type signature.

I hinted at this in an earlier post, but can you give an example where it's important to dispatch on a precise type? For example, why would I ever want to select different behavior when an argument has type Nat{>2} vs. when it has type Nat{>5 and even}? I don't think I would. So my position is that we can infer the coarse shape of values in a decidable way and use that for dispatch.

Could you explain or link to documentation on how to parse _(««_»«`_»?‡,»)?

Unfortunately, we're still

Unfortunately, we're still missing an in-depth description of this syntax on the website. If I recall, MessageSplitter.java has some hefty comments describing the clauses that can occur, but let me spec this particular name here...

_(««_»«`_»?‡,»)

The first _ corresponds to a place where an argument goes, in this case some subexpression that produces a function. I'm sure you got that one.

The open-parenthesis is a literal character that should occur after that argument. As with all individual operator characters or runs of alphanumerics, whitespace is permitted around it (and is required between two alphanumeric tokens).

Now it gets interesting. The outer guillemet group "««_»«`_»?‡,»" has two components: the part before the double-dagger (‡) and the literal comma (,) that comes after it. This means we can have a repetition of the left side zero or more times, separated by commas. The thing on the left is "«_»«`_»?". This is itself composed of two parts: a repetition of an argument (which will be constrained by the method signature to have zero or one occurrences), and "«`_»?". The question mark after the guillemet group means the group is optional, and its presence or absence should be indicated by pushing the constant true or the constant false, respectively. Inside that optional guillemet group is a back-tick (`) and the underscore character. The back-tick escapes a metacharacter, treating it like an ordinary operator character. In this case it means we accept an actual underscore token (_) at the call site.
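
Laid out piece by piece (purely as a reading aid - it's the same name as above):

_          /* argument: the function being curried */
(          /* literal open parenthesis */
«          /* begin a group, repeated zero or more times... */
  «_»      /*   an argument, constrained to zero or one occurrence */
  «`_»?    /*   an optional literal underscore token (` escapes the _) */
‡ ,        /* ...with occurrences separated by literal commas */
»          /* end of the repeated group */
)          /* literal close parenthesis */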

The semantic restriction ensures we have the right number of comma-separated arguments-or-underscores for the type of function being curried, and that the argument expressions have suitably strong types for the function's expected argument types. The semantic restriction also makes sure there is at least one underscore (otherwise the call site is talking about a simple function invocation, "_(«_‡,»)").

The semantic restriction also strengthens the resulting function type, stating that the returned function must produce the same type as the original function, but takes arguments corresponding to the input types at the positions where the underscores occurred.

The implementation should (does) agree with the semantic restriction, and constructs a simple function with the expected input and output types -- at runtime -- with the assistance of a convenient primitive that does that tiny bit of work.

Let me know if I missed anything or was unclear. There are a few other interesting guillemet group modifiers described in MessageSplitter.java. We also have a few more powerful ones planned, for example circled numbers to be used by renames that re-order arguments.

Finally, your question about dispatching on values... Introspection sort, Fibonacci, and a host of other operations lend themselves to dispatching by count. Even the currying operation above relies on a zero-or-one sized tuple type to treat a guillemet group as an optional thing rather than a repeated thing.

Introspection sort,

Introspection sort, Fibonacci, and a host of other operations lend themselves to dispatching by count.

It's not clear that this is "important", though, which was one of Matt's criteria. If a subset of numbers is really important, you probably want to keep it distinct from the set of numbers used to represent it anyway, which means it'll have its own type descriptor on which to dispatch. It's not clear that moving program terms into dispatching logic is really that beneficial (which is probably one reason predicate dispatching hasn't caught on).

The only places I'd use subset constraints are in partial functions, for the range that's undefined (like 0 in division), in which case I want a compile-time error for values that are out of range. I don't see the advantage in dispatching on counts within the range for which a function is defined.

Predicate dispatching (like

Predicate dispatching (like the groundbreaking work in Cecil) was an interesting idea, but I don't like the idea of an object changing its type. I think that's my big issue with it. There's also the (solvable) problem of how to specify a partial order of "named" predicates (like Eiffel's named invariants) so that the dispatch logic can choose the most specific case.

Avail allows a version of this directly, but only by checking all of the invariants at instantiation time and attaching "natures" to the object, which are later used to influence dispatch. Note that the type of an object can't change, but the (immutable) state of the object still influences its type in an arbitrary way.

Avail's object model also allows embedded mutable variables to be shared between multiple objects, providing an interesting type-safe non-delegating form of instance inheritance like Sᴇʟꜰ's (that's Self if my font trick doesn't work) parent slots. It wasn't designed with that in mind, but I noticed the correspondence between the two ideas in the late '90s.

Thanks for the explanation

Thanks for the explanation of your syntax. That makes sense.

Finally, your question about dispatching on values... Introspection sort, Fibonacci, and a host of other operations lend themselves to dispatching by count.

My question was about dispatching on types. I infer from your answer, though, that your rule is to dispatch to the most specific function (this is where you need the subtype comparisons) for which the arguments are each of the corresponding parameters' type. I guess that makes sense. It's quite different from how my setup works, but it sounds like an interesting language. Good luck!

I leave you with the suggestion to try to bolt on a syntax for quantified types in a way that allows you to write e.g. 'a -> a' and get back 'Any -> Any' with an implicit semantic restriction. :)
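
For an identity function, for instance, that sugar might expand to something like this (a sketch in the style of the earlier snippets; the "a -> a" surface syntax and the name "id_" are imaginary):

/* What you'd write: id : a -> a. What it would expand to: */
Method "id_" is [x : any | x];
Semantic restriction "id_" is [xType : any's type | xType];

The generated restriction simply echoes the argument's static type back as the call site's result type.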

Thanks for that! We

Thanks for that! We certainly generate semantic restrictions like that in other circumstances, but this particular one would be a handy shortcut for those places where quantified typing is all that's needed.

Open source?

Several parts are interesting to me. I'll edit this post later to add more comments, unless I find out it's planned to be proprietary, in which case I'll say less. Is part of the project open source? (Is it possible you mean s/persuasive/pervasive/?)

Edit: yes, the first paragraph at the top of www.availlang.org says it's open source with a 3-clause BSD license, which sounds good to me. The virtual machine, and presumably the whole dev environment, runs on top of Java. So it seems to be a Java extension of sorts. (This dampens my personal interest, since I'm only interested in lightweight processes and coroutines in code that can run in C or C++.)

fibers

Avail doesn't extend Java, but it does run on top of the JVM - like Scala and Clojure.

Avail has light-weight fibers (via reified immutable continuations) that are multiplexed on top of JVM threads.

Java core

Java is the implementation language, that's all. Well, and we also have a POJO (plain old Java object) interface, which hasn't gotten a great deal of use. POJOs have a sufficiently precise type in Avail that we're even able to unerase generics, potentially making it safer to use Java objects in Avail than in Java!

The Avail VM used to be written in Smalltalk and C++ (most of the C++ part being mechanically translated at will from Smalltalk), and it was only in the last few years that we ported it onto Java. We're quite likely to move it entirely off Java and onto either LLVM or Clang (leaning toward Clang for its licence freedom) in the next year or two.

use of fibers is cool

Semantics carry over from implementation to runtime, so it's not a clean disconnect. Care and feeding of the JVM is a production deployment dependency, until you do it another way. I'm just observing, not complaining. When you move, I might be able to use it in something that cannot embed a JVM runtime.

I like your use of fibers and coroutines, which is my main interest in the language. (I'm fairly indifferent to what a language looks like. I care mainly about runtime behavior. If a language's runtime behavior cannot be described except in terms of something very complex about syntax and type models, the resulting second-class status of runtime behavior makes a language less desirable, as a matter of personal taste.) I'll keep track and look for an opportunity to use it later.

We appreciate your concern

We appreciate your concern about the runtime dependency, which is why we moved it to Java in the first place -- from Smalltalk and C++, neither of which was as ubiquitous as Java (on desktops, our initial broad target). Java's pretty clearly a temporary step for us.

You may want to check out VariableDescriptor.java and its subclasses to see how and when we elide locks and memory consistency without impacting the actual visible consistency. There are some pretty novel ideas in there that are basically only applicable to Avail's other novel design choices. The Synchronization.avail module and its neighbors show how we cribbed park/unpark from Java (in a way that makes sense for fibers multiplexed through a work pool), and built up various constructs from that and atomic variable operations. If you think you see a bug or a significant inefficiency in there, please let us know.

Your comment about semantics over syntax (to simplify) is exactly where Avail is coming from. The programmer essentially can't define anything semantically new in the language, because it's already there. Even the way the object hierarchy works is semantically fixed by the VM, and all one can do is choose how to use it.

semantic restrictions

I believe parametric polymorphism can be achieved by 'strengthening' any type via semantic restrictions.

Yeah

It does look like that would work, but it would be verbose (a big semantic restriction clause following each polymorphic function). And if you read at the bottom of that page they explain that the type system is customizable but makes no attempt at soundness, so I'm guessing the influence of Coq was minimal :).

The emphasis of the language appears to be on customization: syntax and type system. It seems like someone posted to LtU a couple of years ago about their similar customizable type system, and I've seen several similar extensible syntax proposals. I don't think either of these is going to be enough to sell the language. I'm with Sean: I'll wait for someone to explain more clearly what else is novel and why it's an improvement.

Edit: But I notice there is a team of 6 people working on this, so maybe I'm being too dismissive.

Soundness

Coq-style correctness was out of scope for Avail's feature set. My goal at the time wasn't to provide a system so straitjacketed that only the relatively narrow subset of correct programs would compile (although a lot has happened on that front since 1993!).

Instead we strove for first-order type safety. That is, any expression is sound as long as all relevant semantic restrictions happen to be correct. So it's second-order type unsafe. It seems to be a pretty good compromise, although we run into wrong semantic restrictions from time to time. But some of those semantic restrictions (see the one for integer exponentiation, "_^_") feel like they're far too complex and powerful to be the consequence of a formal proof. If you can figure out how, we're certainly looking to grow our team.

Four days in and the team is

Four days in and the team is already growing...

Anyone can choose their own

Anyone can choose their own clothes to wear and call themselves a designer (Buxton?). Anyone can choose features for a language and call themselves a language designer.

I'm not seeing a coherent, cohesive story in their documentation, just a bunch of features that are poorly presented, but maybe my presentation standards are too high. If it's worthwhile, I'm sure someone else will pick it up, understand it better than me, and blog about it eventually.

They certainly have an

They certainly have an impressive feature list. Type-system-wise, it looks like they have an undecidable sets-as-types interpretation supporting type union, intersection, and subtyping/subset relations. Not a fan of the syntax, though.

Anyone figured out what this is about?

A mechanism for observing expressions rather than values, thereby enabling a novel method of efficient, transactional change propagation that is more natural and powerful than the observer pattern or functional reactive programming.

"Observing expressions

"Observing expressions rather than values"? Sounds like fexprs...

Observerless

Variables are first-class objects in Avail, whether they're module variables, local (and then perhaps captured) variables, or variables that are explicitly instantiated for some reason. Everything else is immutable. That factoring allows us to focus our time on features tied to tracking mutable state, without having to deal with the variety of special cases that other languages might leave separate (instance variables, local variables, globals/statics).

Now stay with me. We also (currently) implement every Avail value as an instance of the final Java class AvailObject, which has a field called descriptor, of type Descriptor. Descriptor has many subclasses, instances of which may make the object behave like a set (SetDescriptor), a tuple (TupleDescriptor), a tuple of bytes with a packed representation (ByteTupleDescriptor), a Latin-1 string (ByteStringDescriptor), etc. We also have IndirectionDescriptor to forward all messages, to effect a change of representation (e.g., the expression "a" = "a" not only produces true, but will probably replace one of the AvailObjects' descriptors with an IndirectionDescriptor, plugging a forwarding pointer into a suitable place in the AvailObject). Subsequent compares won't need to examine the characters -- after traversal of indirections they'll simply be the same Java AvailObject and be trivially equal.

There are several Descriptor subclasses for representing variables. Some of these implement variables that can be reached from multiple fibers (and therefore multiple Java Threads running on separate cores). Some are for variables that are currently only reachable from one fiber (and don't require locks *or* memory consistency). And some specify a kind of variable that's supposed to be tracked in some way. A fiber (and cached state in the Interpreter that's running it at that moment) keeps flags that indicate whether the fiber is running in a special state.

Avail provides primitives to enable/disable a flag that causes every read access of any variable to add that variable to a (fiber-specific, weak) set of variables. Another primitive allows a "write reactor" to be added to a variable, which is just a nullary function that runs whenever the variable changes. Combining these two mechanisms, we can run an expression like [a!+b*7] while tracking variable reads, thereby collecting the variables a and b. We can then add a write reactor to a and b which, say, causes a text field somewhere (theoretically - Avail has no graphical UI yet) to be updated with the new result of running the function [a!+b*7]. Thus, a slider control that's wired up to change b will cause the write reactor to trip, which will cause a!+b*7 to be computed and updated in the text field. Note that this works even if a and b didn't lexically appear in the body of the function, but were accessed by a helper method ten layers down.

We call this pattern "Observerless" because we don't have to specify which variables need to have change tracking added to them -- we specify a function that produces a value to display somewhere, and let the execution machinery tell us when the function would produce a different value.
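
In hypothetical code, the wiring might look something like this (every method name below is invented for illustration - Avail has no GUI library yet, and the real primitives are lower-level than this):

/* Hypothetical names throughout - illustration only. */
displayed ::= [a! + b * 7];
/* Run it once with read-tracking enabled; this collects a and b. */
tracked ::= variables read while running displayed;
For each of tracked do
[
    v : variable
|
    Add write reactor to v doing [Update the text field with displayed();];
];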

Technically, we also keep track of variables written before being read, and exclude them from the set of dependency variables when running the [a!+b*7] function.

We will eventually have to do something special for primitives that do things like getting the current time from the OS, or read from a file. Case-by-case, I believe.

This is weird

This is similar to what I've been looking into, but if you're going to make a language verbose like this for readability, why make it so unreadable?

Depends on the DSL

J is terse and readable (for experts). Java is both verbose and readable (for novices). What determines readability anyway? Both J and Java can be embedded in Avail as DSLs. Does that make Avail unreadable?

I'm just looking at the

I'm just looking at the examples on the website. It seems as though different parts of the language waffle between literate constructs and absurdly terse functional programming constructs. The literate style seems at times to be awkward:

Use newGame as the implied game and do

And the terseness shows up elsewhere:

"_st|nd|rd|th Fibonacci number"

As a programmer who's used a fairly wide variety of languages, I expect to be able to read code and at least have a vague idea of what's going on - especially in the basic intro programs.

Maybe it's an issue with the tutorials or with my ability to grok code rather than with the language itself, but if I can't look at code and get it, there's no way someone who's only ever seen Java will be able to.

The grammar

The grammar of Avail is actually pretty easy to discuss, both verbally and textually. Things like "_+_" we pronounce "blank plus blank". So if you don't like some specific syntax, we'd love to hear discussion of specific syntactic improvements. You can also play around with it until you have a form you like, then tell us about your experiences. The easiest way to play is to define your own module that imports the Avail module, then uses negated imports (like -"_+_") to block some things and renamed imports ("_+_" → "_aggregate with_") to change other things. There's no need to mess around with parsers or parser generators or even awkward BNF expressions.
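
For example, a dialect module might begin like this (a sketch - the header spelling here is from memory and may be slightly off):

Module "My Dialect"
Uses
    "Avail" =
    (
        -"_÷_",                      /* block standard division outright */
        "_+_" → "_aggregate with_"   /* re-expose plus under a new name */
    )
Body

After that header, code in the module reads "3 aggregate with 4" instead of "3 + 4", and any use of "_÷_" simply fails to parse.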

APL versus COBOL

From what I've seen of the website, the standard library code (in some parts) is very close to natural language. But such language is too verbose for my taste (although I can appreciate the possible (user) benefits).

In particular, I think the syntactic declarations of methods and their semantic restrictions (the most used constructs) should be shortened. I don't like to type much, if I can help it. All this is personal preference, of course: I'm recovering from my own programming language, Enchilada, which can be considered an APL-ish postfix language.

That said, I truly believe that Avail is worth seriously considering and studying for its unique merits.