LtU Forum

Hacker’s Brain – The Psychology of Programming

A blog post on how programmers think. Tidbits:

Everybody in this country should learn to program a computer… because it teaches you how to think.

Nobody would argue with that.

This may be why brain imaging of programmers shows that programming relies more heavily on the language centres of the brain than on the areas associated with maths. Programmers rely on the ventral lateral prefrontal cortex, alongside their working memory.

Something that will be debated forever. That programming is more about language (words and such) than math is why I believe OOP continues to be so successful. Programming simply isn't math.

All this happened in my head. When I got home, I simply wrote out the code, copied and pasted some parts I couldn’t remember, and ironed out the bugs until it worked smoothly. The actual hard part happened away from the computer in this case. You even ‘run’ the programs in your brain by following the logic through step by step to see if it would work – this way you can flag up bugs (gotta catch ’em all!) before actually creating the code.

Programming environments aren't really optimized for thinking. It would be interesting to see if "thinking" and "programming" could be combined for a more augmented experience.

We use our working memory to store information, so when you’re imagining what a line of code does, you have to store the variables and the ideas you’re testing out there. So when you’re thinking of a sequence of events, you need to keep the line of logical reasoning held in your working memory – which is where the similarity to math comes in.


Here’s where things get more interesting still: the brain areas involved with abstract thought are actually the same as those associated with verbal semantic processing (1). But of course that makes sense – language represents ideas just as code represents logic.

True. I wonder if our own internal working memory can be reflected back into the computer – again, augmentation. This is the main point of live programming.

It’s probably down to the very immediate feedback loop that coding provides: the ability to test what you’ve written and immediately see the results is incredibly gratifying. Dopamine is the neurotransmitter that our brain releases in anticipation of reward and I’d argue coders have got dopamine to spare. This is the definition of ‘the work being its own reward’.

Again, there is great value in even better, more immediate feedback loops. Feed the dopamine! And yeah, lots of caffeine.

Neural Programmer-Interpreters

A new paper from Google DeepMind; abstract:

We propose the neural programmer-interpreter (NPI): a recurrent and compositional neural network that learns to represent and execute programs. NPI has three learnable components: a task-agnostic recurrent core, a persistent key-value program memory, and domain-specific encoders that enable a single NPI to operate in multiple perceptually diverse environments with distinct affordances. By learning to compose lower-level programs to express higher-level programs, NPI reduces sample complexity and increases generalization ability compared to sequence-to-sequence LSTMs. The program memory allows efficient learning of additional tasks by building on existing programs. NPI can also harness the environment (e.g. a scratch pad with read-write pointers) to cache intermediate results of computation, lessening the long-term memory burden on recurrent hidden units. In this work we train the NPI with fully-supervised execution traces; each program has example sequences of calls to the immediate subprograms conditioned on the input. Rather than training on a huge number of relatively weak labels, NPI learns from a small number of rich examples. We demonstrate the capability of our model to learn several types of compositional programs: addition, sorting, and canonicalizing 3D models. Furthermore, a single NPI learns to execute these programs and all 21 associated subprograms.

Seems much closer than before!
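
To make the moving parts in the abstract concrete, here is a schematic, runnable toy of the NPI control flow only: a persistent key-value program memory plus a core that, at each step, either signals end-of-program or names the next subprogram to call. The core below is a hand-written rule table standing in for the trained recurrent network, and the environment encoders and scratch pad are omitted entirely; this sketches the interface, not the paper's actual model.

    # Toy NPI-style interpreter: key-value program memory plus a "core" that
    # decides, per step, whether to stop or which subprogram to call next.
    # The real core is a learned RNN; this rule table is a stand-in.

    class Program:
        def __init__(self, name, primitive=False):
            self.name, self.primitive = name, primitive

    # Persistent key-value program memory.
    MEMORY = {p.name: p for p in [
        Program("ADD"),
        Program("ADD1"),
        Program("WRITE", primitive=True),
        Program("CARRY", primitive=True),
    ]}

    # (current program, step) -> (end-of-program?, next subprogram key)
    CORE = {
        ("ADD", 0):  (False, "ADD1"),
        ("ADD", 1):  (True,  None),
        ("ADD1", 0): (False, "WRITE"),
        ("ADD1", 1): (False, "CARRY"),
        ("ADD1", 2): (True,  None),
    }

    def run(key, depth=0):
        prog = MEMORY[key]
        print("  " * depth + "call " + prog.name)
        if prog.primitive:
            return                  # primitives would act on the environment
        step = 0
        while True:
            stop, sub = CORE[(prog.name, step)]
            if stop:
                return              # core emitted the end-of-program signal
            run(sub, depth + 1)     # compose lower-level programs
            step += 1

    run("ADD")                      # ADD calls ADD1, which calls WRITE, CARRY

Adding a new task then amounts to adding entries to MEMORY and CORE while everything else stays fixed, which is the sample-efficiency argument the abstract makes.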

Logical and Functional Programming in Each Other

I can easily see how to model functional programs in a logical language. But is there a simple and straightforward way to do the reverse? And when I say "logical language", I'm not even looking for one with backtracking.
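
One standard move for the reverse direction is the "list of successes" encoding: a relation becomes a function that enumerates all of its answers as a stream, so nondeterminism needs no backtracking machinery at all. A minimal sketch in Python (appendo and split_twice are hypothetical names for illustration):

    # "List of successes": a logic-style relation as a generator of answers.
    # appendo is append viewed relationally and run "backwards": given zs,
    # enumerate every (xs, ys) with xs + ys == zs.

    def appendo(zs):
        for i in range(len(zs) + 1):
            yield (zs[:i], zs[i:])

    # Conjunction of goals is just nested iteration over answer streams.
    def split_twice(zs):
        for xs, ys in appendo(zs):
            for ms, ns in appendo(ys):
                yield (xs, ms, ns)

    print(list(appendo([1, 2, 3])))
    # [([], [1, 2, 3]), ([1], [2, 3]), ([1, 2], [3]), ([1, 2, 3], [])]

Full logic variables and unification take more machinery (the miniKanren family shows how far the embedding scales), but the answer-stream view is the simple core, and it stays simple precisely because there is no backtracking to manage.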

Interesting experiment in peer-review

There is an interesting experiment in public peer review occurring over on reddit. The topic is adaptive compilation, which may be of interest to some LtU readers. Although most people read both sites, the link was in the relatively high-volume /r/programming, so it was quite easy to miss.

Rumors in Complexity Theory

I imagine everybody has already read this, and that there's really nothing to discuss. Nothing to do but wait, but I think it should be an LtU post anyway.

For days, rumors about the biggest advance in years in so-called complexity theory have been lighting up the Internet. That’s only fitting, as the breakthrough involves comparing networks just like researchers’ webs of online connections. László Babai, a mathematician and computer scientist at the University of Chicago in Illinois, has developed a mathematical recipe or "algorithm" that supposedly can take two networks—no matter how big and tangled—and tell whether they are, in fact, the same, in far fewer steps than the previous best algorithm.

From Science Magazine.
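
For context, the problem in question is graph isomorphism. The naive baseline tries every relabeling of one graph's vertices, all n! of them, which is why even Babai's claimed quasipolynomial bound is a big deal. A brute-force sketch of the problem statement (emphatically not Babai's algorithm):

    from itertools import permutations

    # Brute-force graph isomorphism over adjacency sets: try all n!
    # relabelings. Illustrates the problem, not Babai's method.
    def isomorphic(g1, g2):
        if len(g1) != len(g2):
            return False
        v1 = sorted(g1)
        for perm in permutations(sorted(g2)):
            m = dict(zip(v1, perm))
            if all((b in g1[a]) == (m[b] in g2[m[a]]) for a in v1 for b in v1):
                return True
        return False

    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    path     = {0: {1},    1: {0, 2}, 2: {1}}
    print(isomorphic(triangle, path))                     # False
    print(isomorphic(path, {7: {8}, 8: {7, 9}, 9: {8}}))  # True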

Andrei's answer to "Which language has the brightest future in replacement of C between D, Go and Rust? And Why?"

Excellent critique of potential C/C++ successors (from a co-creator of D, also top of /r/programming ATM):

Money quote:

A disharmonic personality. Reading any amount of Rust code evokes the joke "friends don't let friends skip leg day" and the comic imagery of men with hulky torsos resting on skinny legs. Rust puts safe, precise memory management front and center of everything. Unfortunately, that's seldom the problem domain, which means a large fraction of the thinking and coding are dedicated to essentially a clerical job (which GC languages actually automate out of sight). Safe, deterministic memory reclamation is a hard problem, but is not the only problem or even the most important problem in a program.

Google Open Sources Skynet

Well. Not that exactly. But a product interesting in its own right.

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

The site is light on information. It shows off a visual programming tool with a backend in Python/C++, so that seems in line with current interest on LtU.

I am interested in data center applications these days; I'm not sure how it does its magic within Google's data centers.
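
To make the "data flow graph" description concrete, here is a minimal sketch against the graph-building API of the initial release (1.x-style; the exact calls are era-specific):

    import tensorflow as tf  # graph API as in the original open-source release

    # Build the graph first: nodes are operations, edges carry tensors.
    a = tf.placeholder(tf.float32, name="a")
    b = tf.placeholder(tf.float32, name="b")
    total = tf.add(a, b, name="total")

    # Nothing has executed yet; a Session runs the graph on CPU or GPU.
    with tf.Session() as sess:
        print(sess.run(total, feed_dict={a: 2.0, b: 3.0}))  # 5.0

The separation between describing the graph and running it is what lets the same description be deployed to CPUs, GPUs, or mobile devices, as the quote says.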

MCG: A Visual Functional Programming Language

Hi everyone,

It's been a while! Some of you might remember me posting here a few years ago about my adventures with type systems and stack-based languages. For the last couple of years I have been head's down working on a visual functional programming language at Autodesk. The language is called MCG (Max Creation Graph) and is part of the commercial 3D animation, modeling and rendering software package 3ds Max. I've written a blog post about MCG that introduces the language to a technically savvy audience.

As some of you may know, visual programming languages are quite commonplace in 3D software packages. A few examples: Houdini, Grasshopper, Softimage ICE, Fabric Engine Canvas, Dynamo, NUKE Compositing, and more. However, I can't find much mention of these languages in the literature.

Switching from designing programming languages as a hobby to doing it professionally has been very interesting! I found myself spending a lot less time consulting the literature and more time responding to customer needs and feedback. Now that MCG has shipped and is being used, I'm interested in reconnecting with the academic community. I am wondering if anyone has suggestions about which aspects of MCG might be of the most interest to researchers, and what type of publication I should pursue (e.g. technical report, experience report, research paper, or simply stick to non-academic publications). Any tips or suggestions would be most welcome! I'm also interested in exploring potential collaborations with researchers, so please reach out to me if this is something that might interest you.


How Useful is Erlang Hot-Swapping of Code?

Please discuss. I am interested.
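
For readers who haven't used it: a running Erlang node keeps up to two versions of a module loaded, and a process executing the old version picks up the new one at its next fully-qualified call, so servers can be upgraded without dropping state or connections. Python has no such discipline, but a crude analogue conveys the basic move of swapping code into a live process (mymod is a hypothetical module written on the fly):

    import importlib, pathlib, sys

    sys.path.insert(0, ".")  # make the generated module importable

    # Crude analogue of hot code swapping, for flavor only; Erlang's version
    # is far more disciplined (two module versions coexist, and processes
    # switch over at their next fully-qualified call).
    pathlib.Path("mymod.py").write_text("def greet(): return 'v1'\n")
    import mymod
    print(mymod.greet())         # v1: the old code is running

    pathlib.Path("mymod.py").write_text("def greet(): return 'v2'\n")
    importlib.invalidate_caches()
    importlib.reload(mymod)      # swap in the new code without restarting
    print(mymod.greet())         # v2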

minor lexical tokenization idea via character synonyms

A few days ago I had a tokenization-stage lexical idea that seems not to suck after going over it several times. On the off chance at least one person has similar taste, I can kill a few minutes and describe it. Generally I don't care much about syntax, so this sort of thing seems irrelevant to me most of the time. So I'll try to phrase this for folks of similar disposition (not caring at all about nuances in trivial syntax differences).

The idea applies most to a language like Lisp that uses a few tokens, parens especially, with very high frequency. I'm okay with lots of parens, but it drives a lot of people crazy, apparently hitting some threshold that reads as gibberish to some folks. Extreme uniformity of syntax loses an opportunity to imply useful semantics with varied appearance. Some of the latent utility in the human visual system is missed by making everything look the same. A few dialects of Lisp let you substitute different characters, which are still understood as meaning the same thing, but let you show a bit more organization. I was thinking about taking this further, where a lot of characters might act as substitutes to imply different things.

The sorts of things you might want to imply include:

  • immutable data
  • critical sections
  • delayed evaluation
  • symbol binding
  • semantic or module domain

A person familiar with both the language and the codebase might read detail into code that isn't obvious to others, and you might want to imply such extra detail by how things look. Actually checking that those things hold, via code analysis, would be an added bonus.

I'm happy using ascii alone for code, and I don't care about utf8, but it would not hurt to include a broader range of characters, especially if you planned on using a browser as a principle means of viewing code in some context. When seen in an ascii-only editor, it would be good enough to use character entities when you wanted to preserve all detail. It occurred to me that a lexical scan would have little trouble consuming both character entities and utf8 without getting confused or slowing much unless used very heavily. You'd be able to say "when you see this, it's basically the same as a left paren" but with a bit of extra associated state to imply the class of this alternate appearance. (Then later you might render that class in different ways, depending on where code will be seen or stored.)

A diehard fan of old school syntax would be able to see all the variants as instances of the one-size-fits-all character assignments. But newbies would see more structure implied by use of varying lexical syntax. It seems easy to do without making code complex or slow, if you approach it at the level of tokenization, at the cost of slightly more lookahead in spots. As a side benefit, if you didn't want to support Unicode, you'd have a preferred way of forcing everything into char entity encoding when you wanted to look at plain text.

Note I think this is only slightly interesting. I apologize for not wanting to discuss character set nuances in any detail. Only the lossless conversion to and from alternatives with different benefits in varying contexts is interesting to me, not the specific details. The idea of having more things to pattern match visually was the appealing part.
