LtU Forum

BrightScript (Just what we needed: yet another scripting language)

I say that somewhat tongue-in-cheek.

BrightScript [Roku] is a powerful bytecode-interpreted scripting language optimized for embedded devices; in this respect it is unusual. For example, BrightScript and the BrightScript Component architecture are written in 100% C for speed, efficiency, and portability. BrightScript makes extensive use of the "integer" type (since many embedded processors don't have floating-point units). This differs from languages like JavaScript, where a number is always a float; BrightScript numbers are floats only when necessary.
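To illustrate the numeric behaviour described above (using Python as a stand-in, since the point is the semantics rather than BrightScript syntax): integer arithmetic stays in integers, and a float appears only when the computation demands one, unlike JavaScript's everything-is-a-float model.

```python
# Integer operations stay integers -- no FPU needed on an embedded chip.
a = 7 + 3
# Division genuinely requires a fractional result, so only here does a
# float appear.  (In JavaScript, both results would be floats.)
b = 7 / 2

print(type(a).__name__, type(b).__name__)  # int float
```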

Languages for SIMT Architectures

I think SIMT is going to be added to the traditional CPU, much like SIMD already has been. In any case, SIMT already has a foothold, since the GPGPU architectures are SIMT.

SIMT is where a single instruction is executed by multiple threads. It differs from SIMD in that each execution unit has its own register set and control flow but shares instruction fetch and decode, whereas in SIMD both registers and control flow are shared. This requires special care with conditional branches. Generally only if/then/else or switch-style conditional branches are allowed, and since all instructions are shared by all SIMT units, each unit has to ignore the instructions in the branch it is not taking. Loops can terminate early, but all units wait for the last one to exit the loop (effectively, instructions still get fed to the finished units; they are simply ignored). This reminds me somewhat of the conditional execution of any instruction in the original ARM instruction set - however, I think this was dropped in ARM64 in favour of traditional branches, because branch prediction results in faster execution in the (more common) single-threaded case.
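The branch-divergence mechanism described above can be sketched in a few lines of Python. This is a toy simulation, not any real GPU ISA: four lanes share one instruction stream, and an execution mask decides which lanes actually commit each side of an if/then/else.

```python
# Four SIMT lanes, each with its own registers (here, one value per lane),
# all receiving the same instruction stream.
lanes = [3, -1, 7, -5]
result = [0, 0, 0, 0]

# Branch: if x > 0 then result = x else result = -x
# Every lane evaluates the condition; the result becomes a per-lane mask.
mask_then = [x > 0 for x in lanes]

# "then" side: all lanes see the instruction, only active lanes execute it.
for i, x in enumerate(lanes):
    if mask_then[i]:
        result[i] = x

# "else" side: the mask is inverted; the other lanes now execute while the
# first group ignores the (still shared) instruction stream.
for i, x in enumerate(lanes):
    if not mask_then[i]:
        result[i] = -x

print(result)  # [3, 1, 7, 5]
```

Both sides of the branch cost execution time for every lane, which is exactly why divergent branches are expensive on SIMT hardware.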

I am interested in languages and compilers designed to take advantage of SIMT architectures: language designs that may make programming such architectures easier, and compiler technologies that may make the resulting code more efficient for the processor to execute. I wanted to start this topic to see what has been done in this field and to collect links to relevant papers. Please post if you know of any, come across any, or have ideas for how to handle SIMT.

Typed Data

While things are quiet, it seems like a good time for a question with a tenuous link to programming languages. Unix was designed around the idea of plain text as a universal interface between programs. It is not the only possible universal interface; there are many possible binary formats with strictly defined syntax and a flexible definition of semantics.

What previous attempts have been made to make a unix (posix) core that operates on a different interface?
- Replacements for the standard set of tools (awk, sed, grep, cut, paste etc)
- A replacement editor for serialised streams (e.g. the equivalent of a text editor)
- Changes to the runtime interface in libc to replace stdin/stdout and read/write etc

I am aware of Microsoft's work on PowerShell, but replacing the textual interface with objects introduces a complex set of semantics. What I am asking is whether anybody has tried something simpler - in particular, a typed representation for data that may look something like an algebraic data type. A fixed binary encoding for:
- integers
- floats
- strings
- a list type structure
- a form of nesting to allow trees

The rough idea here would be to take something as expressive as s-exprs, but combine it with a fixed binary encoding, probably a chunk-based format like the old IFF format from the Amiga. This idea keeps floating back up every year when I teach the posix core to students. Unfortunately it does not suggest a project that I can give to a student (too much programming, and a very weak / undefined comparison with previous work). Some day I may sit down and start hacking, but until then it seems interesting to point out that this is quite an obvious departure point from the unix design philosophy - has anybody been down this road before?
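For what it's worth, a minimal sketch of such an encoding fits in a screenful of Python. The tags and layout below are invented for illustration (loosely IFF-flavoured: 4-byte tag, 4-byte little-endian length, then payload), covering exactly the five cases listed above, with nesting provided by lists of chunks.

```python
import struct

def chunk(tag, payload):
    # Every value is one chunk: tag, length, payload.
    return tag + struct.pack("<I", len(payload)) + payload

def enc(value):
    if isinstance(value, bool):           # keep bool from matching int below
        raise TypeError("no boolean type in this sketch")
    if isinstance(value, int):
        return chunk(b"INT ", struct.pack("<q", value))
    if isinstance(value, float):
        return chunk(b"FLT ", struct.pack("<d", value))
    if isinstance(value, str):
        return chunk(b"STR ", value.encode("utf-8"))
    if isinstance(value, list):
        # A list's payload is the concatenation of its element chunks,
        # which is what gives the s-expr-like tree structure.
        return chunk(b"LIST", b"".join(enc(v) for v in value))
    raise TypeError(type(value))

def dec(buf, pos=0):
    tag = buf[pos:pos + 4]
    (n,) = struct.unpack_from("<I", buf, pos + 4)
    payload = buf[pos + 8:pos + 8 + n]
    end = pos + 8 + n
    if tag == b"INT ":
        return struct.unpack("<q", payload)[0], end
    if tag == b"FLT ":
        return struct.unpack("<d", payload)[0], end
    if tag == b"STR ":
        return payload.decode("utf-8"), end
    if tag == b"LIST":
        items, p = [], pos + 8
        while p < end:
            v, p = dec(buf, p)
            items.append(v)
        return items, end
    raise ValueError(tag)

tree = [1, 2.5, "grep", [3, "nested"]]
assert dec(enc(tree))[0] == tree
```

A stream between two tools would then just be a sequence of such chunks, and the `grep`/`cut` replacements would pattern-match on tags instead of splitting on whitespace.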

SHErrLoc: Diagnosing Type Errors with Class

It has been mentioned several times here and there in LtU discussion comments, but a search didn't show it as a full topic, which seems sad to me, since this is just mind-blowingly awesome stuff. Here's work that takes something in the UX realm, applies big brains to it, and delivers really measurable improvements.

Diagnosing Type Errors with Class:

Type inference engines often give terrible error messages, and the more sophisticated the type system, the worse the problem. It is possible to identify the most likely source of the type error, rather than the first source that the inference engine trips over.

...Using a large corpus of Haskell programs, we show that this error localization technique is practical and significantly improves accuracy over the state of the art.

Nesting imperative into functional code

Did you ever try to do that? Nesting functional into imperative code is easy: you just do the FP magic when you call the function. But how do you do it the other way around?

Assume that you have functional code as a base and some imperative code here and there, maybe even stored as strings in variables. Now you have to call something like JavaScript's "eval" when you want to run the imperative code. But when do you do it? Note that the imperative code doesn't have to be in the form of a function that returns some value. It can set up event actions or mess around with some shared memory state.

When should such imperative processes be called? I guess some event would apply, but which?

In fact, I'm working on a functional language that can easily translate any abstract syntax tree (AST) into another AST. The language is interpreted inside JavaScript (maybe I'll try to do a native version in the future), and at runtime I want to occasionally translate an arbitrary imperative-language AST into a JavaScript AST (or an assembler AST in the future) and execute it. The problem is how and when to trigger this execution. Maybe someone could help me with this problem?
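One common answer, sketched below in Python (all names invented): the functional layer never executes the imperative fragments itself; it only records them as inert descriptions paired with events, and a small driver becomes the single place where they actually run. The "when" is then simply "when the event fires".

```python
# Event name -> list of deferred imperative actions.
events = {}
# Shared mutable state that the imperative fragments are allowed to touch.
state = {"count": 0}

def on(event, action):
    """Effect-free from the caller's view: just records the pairing."""
    events.setdefault(event, []).append(action)

def fire(event):
    """The one place where imperative code actually executes."""
    for action in events.get(event, []):
        action()

# "Imperative code stored in a variable" -- a string, eval'd only when
# its event fires, much like the JavaScript eval mentioned in the post.
code = "state['count'] += 1"
on("tick", lambda: exec(code))
on("tick", lambda: exec(code))

fire("tick")
print(state["count"])  # 2
```

The same shape works for an AST-to-AST translator: `on` would store the translated JavaScript AST, and `fire` would be the interpreter's hook that compiles and runs it.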

The impact of syntax colouring on program comprehension

This paper fits into the discussion about being more scientific about programming language design, or in this case editor design. I particularly liked figure 6.

The impact of syntax colouring on program comprehension


We present an empirical study investigating the effect of syntax highlighting on program
comprehension and its interaction with programming experience. Quantitative data was captured
from 10 human subjects using an eye tracker during a controlled, randomised, within-subjects
study. We observe that syntax highlighting significantly improves task completion time, and that
this effect becomes weaker with an increase in programming experience.

Hacker’s Brain – The Psychology of Programming

A blog post on how programmers think. Tidbits:

Everybody in this country should learn to program a computer… because it teaches you how to think.

Nobody would argue with that.

This may be why brain imaging of programmers shows that programming relies more heavily on the language centres of the brain versus the areas associated with maths. Programmers will rely on the ventral lateral prefrontal cortex, alongside their working memory.

Something that will be debated forever. That programming is more about language (words and such) than math is, I believe, why OOP continues to be so successful. Programming simply isn't math.

All this happened in my head. When I got home, I simply wrote out the code, copy and pasted some parts I couldn’t remember and ironed out the bugs until it worked smoothly. The actual hard part happened away from the computer in this case. You even ‘run’ the programs in your brain by following the logic through step by step to see if it would work – you can this way flag up bugs (gotta’ catch ‘em all!) before actually creating the code.

Programming environments aren't really optimized for thinking. It would be interesting to see if "thinking" and "programming" could be combined for a more augmented experience.

We use our working memory to store information, so when you’re imagining what a line of code does, you have to store the variables and the ideas you’re testing out there. So when you’re thinking of a sequence of events, you need to keep the line of logical reasoning held in your working memory – which is where the similarity to math comes in.


Here’s where things get more interesting still: the brain areas involved with abstract thought are actually the same as those associated with verbal semantic processing (1). But then of course they are – language represents ideas just as code represents logic.

True, I wonder if our own internal working memory can be reflected back into the computer...again augmentation...this is the main point of live programming.

It’s probably down to the very immediate feedback loop that coding provides: the ability to test what you’ve written and immediately see the results is incredibly gratifying. Dopamine is the neurotransmitter that our brain releases in anticipation of reward and I’d argue coders have got dopamine to spare. This is the definition of ‘the work being its own reward’.

Again, there is great value in even better more immediate feedback loops. Feed the Dopamine! And ya, lots of caffeine.

Neural Programmer-Interpreters

A new paper from Google DeepMind; the abstract:

We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.

Seems much closer than before!

Logical and Functional Programming in Each Other

I can easily see how to model functional programs in a logical language. But is there a simple and straightforward way to do the reverse? And when I say "logical language", I'm not even looking for one with backtracking.
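One simple, backtracking-free direction, sketched in Python (everything here is invented for illustration): represent propositional Horn clauses as plain data and let a forward-chaining fixpoint play the role of the logic engine. Since forward chaining monotonically grows the fact set, no choice points and hence no backtracking are needed.

```python
# Rules are (body, head) pairs over propositional atoms:
# every atom in the body must hold for the head to be derived.
rules = [
    (["rain"], "wet_ground"),
    (["wet_ground", "freezing"], "icy"),
]

def consequences(facts, rules):
    """Forward-chain to a fixpoint: repeatedly fire any rule whose body
    is satisfied until no new facts appear."""
    facts = set(facts)          # don't mutate the caller's set
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

print(sorted(consequences({"rain", "freezing"}, rules)))
# ['freezing', 'icy', 'rain', 'wet_ground']
```

The logic program is just a value the functional host manipulates; queries become membership tests on the fixpoint. Variables and unification would need more machinery, but the propositional core really is this small.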

Interesting experiment in peer-review

There is an interesting experiment in public peer review occurring over on reddit. The topic is adaptive compilation, which may be of interest to some LtU readers. Although most people read both sites, the link was in the relatively high-volume /r/programming, so it was quite easy to miss.
