LtU Forum

Haskell for Mac

Available here, with Hacker News and Reddit discussions ongoing.

Even though I'm not a big fan of Haskell, I'm pretty excited about this. It represents a trend where PL is finally taking holistic programmer experiences seriously, and a move toward interactivity in program development that takes advantage of (a) our rich type systems, and (b) our increasing budget of computer cycles. Even the fact that they are trying to sell this is good: if people can get used to paying for tooling, that will encourage even more tooling via a healthy market feedback loop. The only drawback is the MAS sandbox; app stores need to learn how to accept developer tools without crippling them.

Python, Machine Learning, and Language Wars. A Highly Subjective Point of View

A nice article that describes tradeoffs made in choosing a PL for a domain-specific field.

Another "big" question

To continue the interesting prognostication thread, here is another question. Several scientific fields have become increasingly reliant on programming, ranging from sophisticated data analysis to various kinds of standard simulation methodologies. Thus far most of this work is done in conventional languages, with R being the notable exception, as a language mostly dedicated to statistical data analysis. However, as far as statistical analysis goes, R is general purpose: it is not tied to any specific scientific field [clarification 1, clarification 2]. So the question is whether you think that in the foreseeable future (say 5-15 years) at least one scientific field will make significant use (over 5% market share) of a domain-specific language whose functionality (expressiveness) or correctness guarantees are specific to the scientific enterprise of that field.

It might be interesting to connect the discussion of this question to issues of "open science", the rise in post-publication peer review, reproducibility, and so on.

Will scientific fields give rise to hegemonic domain specific languages (within 5-15 years)?
 

word2vec

So I made some claims in another topic that the future of programming might be intertwined with ML. I think word2vec provides some interesting evidence for this claim. From the project page:

A simple way to investigate the learned representations is to find the closest words for a user-specified word. The distance tool serves that purpose. For example, if you enter 'france', distance will display the most similar words and their distances to 'france', which should look like:

                 Word       Cosine distance
-------------------------------------------
                spain              0.678515
              belgium              0.665923
          netherlands              0.652428
                italy              0.633130
          switzerland              0.622323
           luxembourg              0.610033
             portugal              0.577154
               russia              0.571507
              germany              0.563291
            catalonia              0.534176
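
As an aside, the same nearest-neighbour query is easy to reproduce outside the project's C tools. Below is a minimal sketch using the gensim Python library (my own choice, not part of word2vec); "vectors.bin" is a hypothetical path to a model trained and saved in word2vec's binary format:

from gensim.models import KeyedVectors

# Load vectors trained with word2vec and saved in its binary format.
# ("vectors.bin" is a hypothetical path; adjust to your model.)
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Nearest neighbours by cosine similarity, like the `distance` tool.
for word, similarity in vectors.most_similar("france", topn=10):
    print(f"{word:>20}  {similarity:.6f}")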

Of course, we totally see this inferred relationship as a "type" for country (well, if you are OO inclined). A type, then, is related to distance in a vector space. These vectors have very interesting type-like properties that manifest as inferred analogies; consider:

It was recently shown that the word vectors capture many linguistic regularities, for example vector operations vector('Paris') - vector('France') + vector('Italy') results in a vector that is very close to vector('Rome'), and vector('king') - vector('man') + vector('woman') is close to vector('queen') [3, 1]. You can try out a simple demo by running demo-analogy.sh.
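
Purely as illustration, the analogy arithmetic can be replayed with gensim under the same assumptions as the sketch above (the project's own demo-analogy.sh does the equivalent with the C tools):

from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# vector('paris') - vector('france') + vector('italy') should land near 'rome'
print(vectors.most_similar(positive=["paris", "italy"], negative=["france"], topn=1))

# vector('king') - vector('man') + vector('woman') should land near 'queen'
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))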

I believe this could lead to some interesting augmentation in PL, in that types could then be used to find useful abstractions in a large corpus of code. But it probably requires an adjustment in how we think about types. The approach is also biased toward OO types, but I would love to hear alternative interpretations.

Unstructured casting considered harmful to security

Unstructured casting (e.g. Java, C#, C++, etc.) can be harmful to security.

Structured casting consists of the following:

1: Casting self to an interface implemented by this Actor
2: Upcasting
  a) an Actor of an implementation type to the interface type of the implementation
  b) an Actor of an interface type to the interface type that was extended 
3: Conditional downcasting of an Actor of an interface type to an extension interface type.  (An implementation type cannot be downcast because there is nothing to which to downcast.) 

Claim: All other casting is unstructured and should be prohibited.
Note that this means that an implementation cannot be subtyped, although an implementation can use other implementations for modularity.

Edit: The above was clarified as a result of a perceptive FriAM comment by Marc Stiegler.
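
To make the distinction concrete, here is a rough Python rendering of rules 2 and 3, entirely my own illustration and not Hewitt's notation: Python has no static casts, so an isinstance test stands in for the conditional downcast, and all names are hypothetical.

from abc import ABC, abstractmethod

class Account(ABC):                      # interface type
    @abstractmethod
    def deposit(self, amount: int) -> None: ...

class AuditableAccount(Account, ABC):    # extension of the interface
    @abstractmethod
    def audit_log(self) -> list: ...

class SimpleAccount(AuditableAccount):   # implementation type
    def __init__(self) -> None:
        self._balance = 0
        self._log: list = []
    def deposit(self, amount: int) -> None:
        self._balance += amount
        self._log.append(amount)
    def audit_log(self) -> list:
        return self._log

account: Account = SimpleAccount()       # rule 2a: upcast implementation -> interface

# rule 3: conditional downcast, only to an extension interface, checked at run time
if isinstance(account, AuditableAccount):
    account.deposit(10)
    print(account.audit_log())           # prints [10]

The ActorScript example below makes the same points in the post's own notation.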

Actor DepositOnlyAccount[initialBalance:Euro] uses SimpleAccount[initialBalance]。 
 implements Account using
     deposit[anAmount] →
        ⍠AccountSimpleAccount.deposit[anAmount]¶
          // use deposit message handler from SimpleAccount (see below)
     getBalance[ ] → ⦻¶      // always throw exception
     withdraw[anAmount:Euro] → ⦻§▮   // always throw exception

As a result of the above definition, DepositOnlyAccount⊒Account and
DepositOnlyAccount has message handlers with the following signatures:


        getBalance[ ] ↦ ⦻,   //  always throws exception
        withdraw[ ] ↦ ⦻,     //  always throws exception
        deposit[Euro] ↦ Void

The above makes use of the following:

Interface Account with
       getBalance[ ]↦Euro, 
       deposit[Euro]↦Void,
       withdraw[Euro]↦Void

Actor SimpleAccount[startingBalance:Euro]
  myBalance ≔ startingBalance。
      // myBalance is an assignable variable
      // initialized with startingBalance
  implements Account using
     getBalance[ ] →   myBalance¶
     deposit[anAmount] →
       Void                    // return Void
          afterward myBalance ≔ myBalance+anAmount¶
               // the next message is processed with
               // myBalance reflecting the deposit
     withdraw[anAmount:Euro]:Void →
       (anAmount > myBalance) ¿
          True ⦂ Throw Overdrawn[ ] ⍌
          False ⦂ Void          // return Void
               afterward myBalance ≔ myBalance–anAmount ⍰§▮
                    // the next message is processed with updated myBalance

Harnessing Curiosity to Increase Correctness in End-User Programming

Harnessing Curiosity to Increase Correctness in End-User Programming. Aaron Wilson, Margaret Burnett, Laura Beckwith, Orion Granatir, Ledah Casburn, Curtis Cook, Mike Durham, and Gregg Rothermel. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '03). (ACM paywalled link).

Despite their ability to help with program correctness, assertions have been notoriously unpopular--even with professional programmers. End-user programmers seem even less likely to appreciate the value of assertions; yet end-user programs suffer from serious correctness problems that assertions could help detect. This leads to the following question: can end users be enticed to enter assertions? To investigate this question, we have devised a curiosity-centered approach to eliciting assertions from end users, built on a surprise-explain-reward strategy. Our follow-up work with end-user participants shows that the approach is effective in encouraging end users to enter assertions that help them find errors.

Via a seminar on Human Factors in Programming Languages, by Eric Walkingshaw. To quote Eric's blurb:

This paper introduces the surprise-explain-reward strategy in the context of encouraging end-user programmers to test their programs. Attention investment provides a theory about how users decide where to spend their attention based on cost, risk, and reward. Surprise-explain-reward provides a strategy for altering this equation. Specifically, it attempts to lower the (perceived and actual) costs associated with learning a new feature, while making the reward more immediate and clear.

OcaPic: Programming PIC microcontrollers in OCaml

Most embedded systems development is done in C. It's rare to see a functional programming language target any kind of microcontroller, let alone an 8-bit microcontroller with only a few kB of RAM. But the team behind the OcaPic project has somehow managed to get OCaml running on a PIC18 microcontroller. To do so, they created an efficient OCaml virtual machine in PIC assembler (~4kB of program memory), and utilized some clever techniques to postprocess the compiled bytecode to reduce heap usage, eliminate unused closures, reduce indirections, and compress the bytecode representation. Even if you're not interested in embedded systems, you may find some interesting ideas there for reducing overheads or dealing with constrained resource budgets.

Nullable type is needed to fix Tony Hoare's "billion dollar mistake".

The Nullable type is needed to fix Tony Hoare's "billion dollar mistake".
But implementing the idea is full of pitfalls, e.g., Null by itself should not be an expression.

▮ is the terminator used to mark the end of a top-level construct.
An Expression◅aType▻ is an expression which, when executed, returns aType.

In an expression:
1. ⦾ followed by an expression for a nullable returns the Actor in the nullable, or
throws an exception iff it is null.
2. Nullable followed by an expression for an Actor returns a nullable with the Actor.
3. Null followed by aType returns a Nullable◅aType▻.
4. ⦾? followed by an expression for a nullable returns True iff it is not null.

Illustrations:
* 3▮ is equivalent to ⦾Nullable 3▮
* ⦾Null Integer▮ throws an exception
* True is equivalent to ⦾?Nullable 3▮
* False is equivalent to ⦾?Null Integer▮

In a pattern:
1. ⦾ followed by a pattern matches a Nullable◅aType▻ iff
it is non-null and the Actor it contains matches the pattern.
2. Null matches a Nullable◅aType▻ iff it is null.

Illustrations:
* The pattern ⦾x matches Nullable 3, binding x to 3
* The pattern Null matches Null Integer
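
For readers who prefer a mainstream rendering, here is a minimal Python sketch of the same rules; this is my own approximation, in which ⦾ becomes an unwrap method, ⦾? a presence test, and Null aType a typed empty nullable.

from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class Nullable(Generic[T]):
    # A nullable holding either an Actor (here: any value) or null.
    def __init__(self, value: Optional[T] = None) -> None:
        self._value = value

    def get(self) -> T:
        # Plays the role of ⦾: unwrap, or throw an exception iff null.
        if self._value is None:
            raise ValueError("attempt to unwrap a null Nullable")
        return self._value

    def is_present(self) -> bool:
        # Plays the role of ⦾?: True iff not null.
        return self._value is not None

def null(of_type: type) -> Nullable:
    # Plays the role of "Null aType": a type argument is always required,
    # so there is no bare Null expression.
    return Nullable()

assert Nullable(3).get() == 3           # ⦾Nullable 3 is equivalent to 3
assert Nullable(3).is_present()         # True is equivalent to ⦾?Nullable 3
assert not null(int).is_present()       # False is equivalent to ⦾?Null Integer
try:
    null(int).get()                     # ⦾Null Integer throws an exception
except ValueError:
    pass

Note that the sketch respects the post's pitfall: Null by itself is not an expression; it only appears applied to a type.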

Edited for clarity

Eric Lippert's Sharp Regrets

In an article for InformIT, Eric Lippert runs down his "bottom 10" C# language design decisions:

When I was on the C# design team, several times a year we would have "meet the team" events at conferences, where we would take questions from C# enthusiasts. Probably the most common question we consistently got was "Are there any language design decisions that you now regret?" and my answer is "Good heavens, yes!"

This article presents my "bottom 10" list of features in C# that I wish had been designed differently, with the lessons we can learn about language design from each decision.

The "lessons learned in retrospect" for each one are nicely done.

Big questions

So, I've been (re)reading Hamming's /The Art of Doing Science and Engineering/, which includes the famous talk "You and Your Research". That's the one where he recommends thinking about the big questions in your field. So here's one that we haven't talked about in a while. It seems clear that more and more things are being automated, machine learning is improving, systems are becoming harder to tinker with, and so on. So for how long are we going to be programming in ways similar to those we are used to, which have been with us essentially since the dawn of computing? Clearly, some people will be programming as long as there are computers. But is the number of people churning out code going to remain significant? In five years - of course. Ten? Most likely. Fifteen - I am not so sure. Twenty? I have no idea.

One thing I am sure of: as long as programming remains something many people do, there will be debates about static type checking.

Update: To put this in perspective - LtU turned fifteen last month. Wow.

Update 2: Take the poll!

How long will we still be programming?
 