LtU Forum

Gradual Instantiation

Is anyone familiar with a language that allows for "gradual instantiation"? What I mean by this is that the initial algorithm can be stated using very vague types (though not no type at all) that are gradually narrowed until they reach the machine level. For example: Number -> Rational -> Float -> IEEE_754, or CustomerData -> ShippingCustomer -> AmazonCustomer -> {name, address, [items]} -> {String, {String, String, String, Number}, [String]}, and so on. A language that supports this would need a monotonically narrowing type system with some way of checking types at each level and detecting conflicts. For example, Number = Number + Number is fine, but Rational = Number + Number requires that the RHS be Rational as well (with no resolution specified yet).
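
To make the idea concrete, here is a rough Scala sketch (all names are mine, and Scala's abstract type members are only an approximation of the feature, not an existing implementation of it):

// Hypothetical sketch of "gradual instantiation": the algorithm is written
// once against a vague abstract type, and a later stage narrows the type
// monotonically without touching the algorithm.
trait Stage1 {
  type Number                        // all we know: it is some number
  def add(a: Number, b: Number): Number
  def total(xs: List[Number]): Number = xs.reduce(add)
}

trait Stage2 extends Stage1 {
  type Number = (BigInt, BigInt)     // narrowed to Rational (numerator, denominator)
  def add(a: Number, b: Number): Number =
    (a._1 * b._2 + b._1 * a._2, a._2 * b._2)
}

object Demo extends Stage2 {
  def main(args: Array[String]): Unit =
    println(total(List((BigInt(1), BigInt(2)), (BigInt(1), BigInt(3)))))  // (5,6)
}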

Covariance issue when extending "enter" part in BETA?

I spent a bit of time recently reading up on BETA and gbeta, and I've run into a question that I haven't seen answered. I'm not sure whether this is appropriate for LtU, but I honestly can't think of many other places where I could ask and expect a meaningful/accurate response.

In BETA, when I extend or further-bind a pattern, I can add new part object declarations, as well as add to the "enter," "exit," and "do" parts. The thing is, wouldn't adding to the "enter" part mess with subtyping/covariance? Cast in more common OOP terminology, this would be like allowing a programmer to override an inherited virtual method and add additional parameters!
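
For concreteness, here is a hedged Scala sketch of how the problem would look in a mainstream OO language (my illustration, not BETA semantics): a client typed against the supertype has no way to supply the extra input that the extended "enter" part demands.

// Clients of Base supply one input, but Derived's "extended enter part"
// wants two, so the extra one must be defaulted or left uninitialized.
abstract class Base {
  def enter(x: Int): Unit            // the inherited "enter" part
}

class Derived extends Base {
  // The extension can only overload, not override, because the
  // parameter list changed:
  def enter(x: Int, y: String): Unit = println((x, y))

  // Calls made through Base still arrive here, and y must come from nowhere:
  def enter(x: Int): Unit = enter(x, "<uninitialized>")
}

object Client {
  def main(args: Array[String]): Unit = {
    val b: Base = new Derived
    b.enter(42)                      // statically fine; prints (42,<uninitialized>)
  }
}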

Do BETA/gbeta make this a run-time error? Do they just leave variables uninitialized in cases where not enough values are provided as input to a pattern? I haven't seen any statements on this point.

In Spark I had to put a bit of thought/effort into resolving this issue, so I wouldn't be surprised if there is prior work I'm just not aware of.

Is Rx FRP?

There is some talk on Hacker News (see also this) that functional reactive programming should be defined as programming that is reactive and functional. Such "FRP" would then include event-streaming dataflow systems like Rx. Is the dilution in terminology harmless and I'm just being pedantic, or is there genuine confusion in hacker land?
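
For reference, the classic (Elliott/Hudak-style) definition is denotational: behaviors are functions of continuous time, not merely streams of discrete events. A toy Scala rendering of the distinction at issue (illustrative types, not a real library):

// Classic FRP is built on continuous, time-indexed Behaviors; Rx-style
// systems expose only discrete event streams.
object FrpVsRx {
  type Time = Double
  type Behavior[A] = Time => A             // defined at *every* instant
  type EventStream[A] = List[(Time, A)]    // discrete occurrences, Rx-like

  // A behavior can be sampled anywhere; an event stream cannot.
  val angle: Behavior[Double] = t => math.sin(t)
  val clicks: EventStream[Unit] = List((0.5, ()), (1.2, ()))

  def main(args: Array[String]): Unit =
    println(angle(0.25))                   // sampling at an arbitrary time
}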

Ambiguous language namespaces

It seems like every language that has come out recently has a name already in use in some way: Swift, Go, Elm.

So I'm wondering, how horrible would it be to call a new language "Make" or "Maker"?

Compositional let bindings

I have been working on compositional let bindings and wanted to get some comments on what I have so far. Each program fragment is typed with a monomorphic input context (the free-variable requirements of the fragment) and a polymorphic output context (the definitions exported from the fragment). Lambda abstraction works as usual, removing a variable from the input context of the body. Let is more interesting: it adds the defined variable to the output context, but we also want the binding to be usable in expressions. My first attempt is to have the let binding act as the identity function, so that application does the necessary substitutions (symmetrically) from definitions in one fragment to requirements in the other. This is what a typing derivation looks like:

(x = \z . (z, z)) (x 1, x true)

1. [var]		{z : a} |- z : a
2. [var]		{z : a} |- z : a
3. [prd (1) (2)]	{z : a, z : b} |- (z, z) : (a * b)
4. [abs z (3)]		|- (\z . (z, z)) : (a -> (a * a))
5. [let x (4)]		|- {|- x : (a -> (a * a))} (x = (\z . (z, z))) : (b -> b)
6. [var]		{x : a} |- x : a
7. [lit]		|- 1 : Int
8. [app (6) (7)]	{x : (Int -> a)} |- (x 1) : a
9. [var]		{x : a} |- x : a
10. [lit]		|- true : Bool
11. [app (9) (10)]	{x : (Bool -> a)} |- (x true) : a
12. [prd (8) (11)]	{x : (Int -> a), x : (Bool -> b)} |- ((x 1), (x true)) : (a * b)
13. [app (5) (12)]	|- {|- x : (a -> (a * a))} ((x = (\z . (z, z))) ((x 1), (x true))) : ((Int * Int) * (Bool * Bool))

Does this approach seem reasonable? It seems that I could implement all sorts of weird scoping rules, since definitions compose upwards; that is probably not desirable, but it can be fixed by clearing the polymorphic output context where appropriate.
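
(For readers following along, here is one possible encoding of the judgment shape described above; this is my reading of the post, not the poster's implementation.)

object LetFragments {
  sealed trait Type
  case class TCon(name: String) extends Type               // Int, Bool, ...
  case class TVar(name: String) extends Type               // a, b, ...
  case class Arrow(from: Type, to: Type) extends Type      // a -> b
  case class Prod(left: Type, right: Type) extends Type    // a * b

  case class Scheme(quantified: List[String], body: Type)  // polymorphic

  // A fragment's typing judgment: monomorphic requirements in, polymorphic
  // definitions out. Duplicate requirements are allowed, as in step 12.
  case class Judgment(
    inputs:  List[(String, Type)],
    outputs: List[(String, Scheme)],
    result:  Type
  )

  // Step 12 encoded: {x : (Int -> a), x : (Bool -> b)} |- ... : (a * b)
  val step12 = Judgment(
    inputs  = List(
      "x" -> Arrow(TCon("Int"),  TVar("a")),
      "x" -> Arrow(TCon("Bool"), TVar("b"))),
    outputs = Nil,
    result  = Prod(TVar("a"), TVar("b"))
  )
}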

Lambda: A Peek Under the Hood

Brian Goetz presented Lambda: A Peek Under the Hood on October 31st, 2013.

I found the interaction between the implementation and the language design for lambdas interesting. It's nice that the open-ended solution with invokedynamic avoids over-specification and allows for a good selection of future optimizations.

Artificial Intelligence

Does anyone care about learning some AI methods?

Here is an 11-page paper.

Making implicits less powerful?

Many people feel that Scala implicits are powerful enough to replace all uses of Haskell type classes, but might be too powerful, because they make it too easy to shoot yourself in the foot.

The problem is caused not just by implicit values, but also by implicitly invoked functions. These are necessary if you want to cover all the uses of type classes. For example, the statement "if values of type T can be compared, then values of type List<T> can be compared as well" corresponds to an implicitly invoked function that receives a Comparator<T> and returns a Comparator<List<T>>. Having implicitly invoked functions introduces an alternate model of computation into the language, allowing a simple form of logic programming. A similar thing happens in Haskell with type class constraints. The Agda designers decided against implicitly invoked functions for precisely this reason.
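
For concreteness, here is roughly what that looks like in Scala, using java.util.Comparator (a standard sketch; the object wrapper and names are mine):

import java.util.Comparator

object TypeClassStyle {
  // The implicitly invoked function from the example: given a way to
  // compare T, derive a way to compare List[T].
  implicit def listComparator[T](implicit c: Comparator[T]): Comparator[List[T]] =
    new Comparator[List[T]] {
      def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match {
        case (Nil, Nil)         => 0
        case (Nil, _)           => -1
        case (_, Nil)           => 1
        case (x :: xt, y :: yt) =>
          val head = c.compare(x, y)
          if (head != 0) head else compare(xt, yt)
      }
    }

  implicit val intComparator: Comparator[Int] =
    (a: Int, b: Int) => Integer.compare(a, b)   // SAM conversion, Scala 2.12+

  // Resolution chains these automatically: summoning a comparator for
  // List[List[Int]] invokes listComparator twice -- the "simple form of
  // logic programming" mentioned above.
  val nested: Comparator[List[List[Int]]] = implicitly
}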

I was unable to find any consensus on how to make implicits weaker, allowing the useful scenarios but preventing the harmful ones. Scala has adopted several such restrictions, described in the links above. I feel that the Scala model might still be too permissive. Here are some more ideas for restrictions:

1) Require explicit importing of all implicit values defined in other modules, except the ones automatically defined by the language (e.g. Comparator<Pair<Int,Int>> or the function Comparator<T> -> Comparator<List<T>>). This might be too drastic though, especially if the language requires you to say "deriving Eq" and doesn't define stuff automatically.

2) Check for all possible ambiguities in implicit resolution, not just actual ones. Haskell's rules for instance contexts (7.6.3) are one way to make that decidable. In Scala I guess it's undecidable, but I can't say for sure.

3) Allow only types whose definition is marked as "implicit" to be used as implicit arguments. For example, Comparator would be allowed to be implicit, but Int, String, or function types wouldn't. This would also remove a potential ambiguity, because functions marked as "implicit" would always be treated as implicitly invoked functions, and never as implicit values.
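
The double role mentioned in (3) is easy to demonstrate in current Scala, where an implicit value of function type can be used both ways (a small sketch; names are mine, and the conversion may trigger a compiler feature warning):

object Ambiguity {
  // An implicit value of function type plays BOTH roles: it can be passed
  // as an implicit argument of that type, and it can be applied as an
  // implicit conversion. Restriction (3) would separate these.
  implicit val show: Int => String = n => "#" + n

  def render(x: Int)(implicit f: Int => String): String = f(x)

  val asValue: String = render(42)   // "#42": show passed as an argument
  val asConversion: String = 42      // "#42": compiler inserts show(42)
}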

Has anyone here given this some thought? What are the engineering considerations?

2014 APL Programming Competition is Open

The sixth annual International APL Problem Solving Competition is now live!

Dyalog Ltd invites students worldwide to put their programming and problem-solving skills to the test by using any APL system to develop solutions to a series of ten problems. This is a contest for people who love a challenge and learning new things for fun, with the added bonus that you can win one of 43 cash prizes totalling $8,500, including a grand prize of $2,500 and a trip to Eastbourne in the U.K. to attend the annual Dyalog Ltd user meeting in September 2014.

For the rules and eligibility criteria and to enter the competition, go to http://www.dyalogaplcompetition.com/.

If you have friends who love a challenge and learning new things for fun, or you know students who might be interested in participating, then please recommend this contest to them.

The deadline for submitting solutions is 6 August 2014. Winners will be announced on 18 August 2014.

Good luck and have fun!
