alternate basic models of framing code behavior and purpose?

(This first sentence would have been a rhetorical question, because those are short and clear, but usually someone thinks I want a question answered, like I'm asking a homework question. So the next paragraph begins with the question. But it's rhetorical. It's only meant to frame a topic clearly as the top point of a reverse pyramid. Questions are good for that. It might have something to do with stupid headline style that inspires Betteridge's law of headlines.)

Do you have alternate ways of framing descriptions of program behavior with better results (clarity, concision, directness, etc.) in some domain, or class of application, which simplify or untangle the pattern of desired activity? I ask for a couple of reasons. First, there's not much traffic and I hope to spur a small amount of discussion. Second, I'm cooking up a fairly inchoate idea I might try to describe under this general heading. Rather than devote a topic to just my crazy idea, the general idea seems more interesting: how you describe a problem affects how easy it is to solve.

It's sort of an invitation to compare and contrast how you model what happens in code, including cliché paradigms like object-oriented or functional. But marketing prose for old stuff isn't what I hope to hear. (And I may write a dialog mocking a marketing pitch if it's really blatant. :-) Instead I hope to hear about less conventional ideas, whose explanations have an odd or surprising quality (even if that means half-baked and not obviously useful). This implies a weird ontology: if a programming language revolves around an unorthodox idea, a programmer must think in terms of that model to write code. I anticipate no responses, though, except perhaps "everyone must start using actors now", but a happy surprise would be nice.

I'll take a shot at describing my half-formed idea in a post later. But it hasn't gelled at all, and I'm even having trouble finding words that tend to imply the right thing. It involves viewing code as usually waiting for something to happen. For example, in a UI, menu items wait for you to select them, and a command line waits for you to enter another command. Installed programs wait for you to use them, and functions in a library wait for you to call them. Servers wait for requests, and network cards wait for packets. Writing code generally involves editing what is ready to react to stimulus, in some form of input, usually with an aim to generate output or cause effects. But the word "wait" is a bad fit because it implies active waiting, with a purposive (almost teleological) framing that doesn't include the necessary aspect of passive option. Starting from "option" seems equally valid, and perhaps other words too. There's more, but it's just as unclear so far.

Edit: An idea which doesn't gel can sound tautological, making you ask: What is the point? Exploring a thread can cause every loose end to dry up so you only know what you knew before, that you merely tried an awkward angle.

Live programming

The goal of live programming is to move the problem-solving model from outside of your head into the programming environment. How we write code is at least as important as the qualities of the code we eventually write, yet we focus too much on the linguistic properties of the code the programmer writes to solve a problem, and not enough on how we get there in the first place! Reading code is nice, but we have to write it and make it work too.

So consider an abstraction: why would the programmer want to use it, and how can the programming environment help the programmer decide when they need to wait? How can the programmer express what they want to do independently of how it is done (declarative programming!), so that the computer can help them figure out the how? Also consider that I can encode a lighting or physics equation in my program; but how could I go about deriving something new in my programming environment? Can we bring what we would otherwise sketch out on a whiteboard, solve as equations, or explore with Mathematica into the core programming environment itself?

By running code while we are editing it, it is no longer a passive entity: it is doing something, and we can observe that. We can create a tight feedback loop where we try something out and observe its effect (a REPL), except not just one line at a time.

Devil's advocate

Are good abstractions really going to arise out of building an example and then abstracting some variables? There have recently been studies in the news reporting that technology in the classroom is correlated with lower standardized test scores, and I personally find that my most important advances are made away from the computer. I suspect that the big ideas about modeling and algorithms are going to continue to be this way (or, at least, with focused thought in front of the computer rather than feedback oriented). Better feedback and tooling seems important as a constant factor improvement in building software rather than a change in the Big-Oh.


Wasn't it Dijkstra who said students were spoiled by using interactive terminals rather than thinking about their programs carefully before they entered them onto punchcards? How would you rate the move from punchcards to interactive terminals in making programming "better"? A constant factor?

But that's not really related to this topic. One could imagine an app on the computer, separate from the programming environment, that was useful for brainstorming, note taking, writing presentations, writing papers, sketching wireframes, and such. Computer-assisted thinking is already here; it just isn't very deep into our programming environments for some reason. Many programmers oddly tend to be anti-computer in this respect.

it is a poor craftsman who blames their tools

because they'd have made better tools. not because they would stick with really crappy ones.

a genius is probably going to be even better with a great tool than with an old crappy one, even if they can out-do us all even with the crappy one.


so yeah people need to know how to think first, that is a great valid point worth stressing. but i don't think that should somehow then prove we never need better regular tooling.

the BEST tooling would be toward helping the THINKING of course!

In the words of the famous

In the words of the famous computer scientist Obi-Wan, "Use the Force, Luke." He then turned off his targeting computer and was met with "Luke, you switched off your targeting computer! What's wrong?" Using a computer to hit a target is for woosies. And it worked too! Link.

But in the real world, I bet the targeting computer would have been more useful, and anyways there are no midichlorians to give us special psi powers. Cybernetically enhanced thinking and problem solving is probably our future.


In the real world they dance upon the surface of the caffeine molecule, and those who are receptive can channel their advice through copious amounts of strong coffee.

Do computers, not drugs

Caffeine, nicotine, LSD, ... are all wrong ways to intellectual enlightenment ("wrong" in varying degrees of course). Interfacing with a computer can be much healthier.

But... caffeine and nicotine

But... caffeine and nicotine are two of the steps on the eight-fold path to releasing your programs from the samsara of debugging.

Stages, Rates, and Worlds

I mostly work with GPU graphics pipelines. The relevant thing about this domain is that we often need to think about computations that occur at different times (per-frame, load-time, or offline asset compilation) and places (CPU-vs-GPU, as well as the many stages of the hardware rendering pipeline).

My colleagues are graphics researchers; when I see the programs they produce (I've made a few of these myself), they almost invariably have a main rendering view of the algorithm/effect/technique being developed, and then a "control panel" full of checkboxes, sliders, and other widgets to toggle between variations of the technique.

The main concepts in play, then, are a notion of spatio-temporal "rates," along with a variety of "features" that will be developed, combined, and toggled on a regular basis.

The traditional way to approach this domain involves first decomposing things by rate, into procedures that run at each of the distinct times/places involved (e.g., you write a "vertex shader" and a "fragment shader" as distinct procedures, because they run at different points in the HW pipeline flow), along with type definitions for the data that flows between rates. Once that primary decomposition is done, each of the checkbox "features" typically turns into an `#ifdef` in the GPU code (usually just an `if` on CPU).

The alternate (and my preferred) approach is to decompose things first by feature. This requires that each feature be able to describe computations at various spatio-temporal rates, and also requires rules for how features interact when composed. Each feature ends up looking like a small end-to-end pipeline, so they are sometimes called "pipeline shaders."
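As a rough sketch of the contrast, in plain Python with entirely hypothetical names (not any actual GPU API): each feature bundles its contributions at the vertex and fragment rates, and composition builds one procedure per pipeline stage from the feature list.

```python
# Sketch of feature-first decomposition: each feature describes its work at
# several rates, and composing features yields one shader per stage.
# All class and attribute names here are illustrative inventions.

class Feature:
    def per_vertex(self, v):    # runs at vertex rate
        return v
    def per_fragment(self, f):  # runs at fragment rate
        return f

class Skinning(Feature):
    def per_vertex(self, v):
        # offset the position by a (fake) bone transform
        v["position"] = v["position"] + v.get("bone_offset", 0.0)
        return v

class Fog(Feature):
    def per_fragment(self, f):
        # blend the color halfway toward a (fake) fog color of 1.0
        f["color"] = f["color"] * 0.5 + 0.5
        return f

def compose(features):
    """Build one vertex shader and one fragment shader from many features."""
    def vertex_shader(v):
        for feat in features:
            v = feat.per_vertex(v)
        return v
    def fragment_shader(f):
        for feat in features:
            f = feat.per_fragment(f)
        return f
    return vertex_shader, fragment_shader

vs, fs = compose([Skinning(), Fog()])
print(vs({"position": 1.0, "bone_offset": 0.25}))  # position becomes 1.25
print(fs({"color": 0.0}))                          # color becomes 0.5
```

Toggling a feature on or off is then just including or excluding it from the list passed to `compose`, rather than flipping an `#ifdef` inside each stage's code.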

This worldview isn't original to graphics, of course. The subsumption architecture in robotics had previously championed the idea of composing many small end-to-end pipelines instead of authoring each stage independently. I've recently found that the modal type system used for "worlds" in ML5 (to describe distributed programs) is a very good fit for the notion of "rates" in my own pipeline shader work.

I'm not entirely sure if this is along the lines of what the prompt was looking for, but to me it seemed relevant.

I think the graphics people

I think the graphics people have been out front on this for a while: they have a domain that is incredibly visual, so they can provide good feedback, and they also have much more need for tweaking and tuning (and nice scrubbable values to get there!).

I've had some success with the subsumption architecture, but it does have its limitations, especially when you aren't controlling a bunch of robots or virtual robots (as in a game). Chris Granger mentioned that they tried it out as well.


Yes, I would definitely not claim that subsumption is a good universal model for how to author software. It is uniquely beneficial when you have something that you know *should* be a pure function ("what voltage should I send to these motors given the sensor input I see right now, and the state I've recorded?"), but actually writing a closed-form function a priori is intractably difficult. When just writing the closed-form function is tractable, it will typically be faster to just do it; you wouldn't want to use subsumption to write quicksort.
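A toy illustration of that style of control (a simplification, not Brooks' exact formulation): the motor command is recomputed from current sensor input each tick, with higher-priority layers able to subsume lower ones.

```python
# Minimal subsumption-flavored controller sketch. Layer and sensor names
# are made up for illustration.

def wander(sensors):
    return {"left": 1.0, "right": 1.0}       # default: drive forward

def avoid(sensors):
    if sensors.get("obstacle_ahead"):
        return {"left": -1.0, "right": 1.0}  # turn away from the obstacle
    return None                              # defer to lower layers

LAYERS = [avoid, wander]  # highest priority first

def motor_command(sensors):
    """Pure function from current sensor state to motor output."""
    for layer in LAYERS:
        out = layer(sensors)
        if out is not None:                  # first layer to act wins
            return out

print(motor_command({}))                     # wander output
print(motor_command({"obstacle_ahead": True}))  # avoid subsumes wander
```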

(Aside: machine learning is starting to provide another viable approach in this space (e.g., deep convolutional neural nets), with the down-side that they are not currently easy to debug or fine-tune.)

To be clear, though, my goal wasn't as much to argue for subsumption (although it is *really* useful in my domain), but for thinking more overtly about space-time rates (stages/phases/worlds/what-have-you).

particular places and times

I like that your model includes where and when aspects, because a good result requires planning context of computation. Is it mainly a matter of efficient resource scheduling? I guess also because deliverables have a time target after which they get stale.

Mostly just performance

The main reason anybody uses a GPU is for performance. Sometimes this is a soft or hard real-time constraint (e.g., virtual reality) so that, as you say, an answer delivered too late is stale, and cannot be used.

As a result, in graphics we will often prefer to make approximations (give a "wrong" answer) rather than take a longer time to give the "right" answer. The decision of where to run something in the pipeline (e.g., per-vertex vs. per-pixel) is usually a performance-vs-quality decision, since a triangle of 3 vertices will typically cover many more than 3 pixels, so per-pixel evaluation gives a more dense sampling (more expensive, but higher quality).

Functional modeling

Has someone done a functional modeling of the render pipeline that can be compiled to efficient GPU code? E.g., a model is a set of triangles with some associated material property functions. When you try to render such a model in some context, those functions end up getting invoked per fragment. This render context would itself be modeled functionally. Shifting between "rates" would be modeled by specifying data at one set of indices in terms of another (e.g. lerp between vertex data to get fragment data). This is how I imagine wanting a GL to work, and you seem like a good person to ask.
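Something like this toy sketch, with made-up names and plain Python standing in for a real GL: vertex-rate data is lifted to fragment rate by an explicit interpolation step, and material functions are then mapped over the fragments.

```python
# Sketch of a functional pipeline fragment: rasterization interpolates
# vertex attributes to fragment rate, then a material function is mapped
# per fragment. Names and the 1-D "edge" setup are illustrative only.

def lerp(a, b, t):
    return a + (b - a) * t

def rasterize(v0, v1, samples):
    """Lift vertex-rate attributes to fragment rate along one edge."""
    return [{k: lerp(v0[k], v1[k], i / (samples - 1)) for k in v0}
            for i in range(samples)]

def shade(material, fragments):
    return [material(f) for f in fragments]  # material invoked per fragment

frags = rasterize({"u": 0.0}, {"u": 1.0}, samples=3)
colors = shade(lambda f: f["u"] * 2.0, frags)
print(colors)  # [0.0, 1.0, 2.0]
```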

I don't understand. The

I don't understand. The render pipeline was functional and still mostly is.

Or maybe you are looking for hull shaders?

Functional GPU stuff

There's a lot of work that has attempted stuff in this basic direction.

Conal Elliott's Vertigo ("Programming Graphics Processors Functionally") is a very clean and well-reasoned formulation, as you might expect. It is primarily aimed at procedural effects, so it would require extensions to apply to more general cases (e.g., it requires geometry to be parametric surfaces, rather than arbitrary triangle meshes).

Chad Austin's Renaissance ("Renaissance: a Functional Shading Language") uses a ML/Haskell style purely functional language for describing shaders. It is based on the idea that computations are per-fragment by default, but the compiler is allowed to hoist certain computations to the vertex stage (e.g., linear math).

The GPipe package for Haskell models the graphics pipeline fairly closely (e.g., there is a "rasterize" function that maps a stream of primitives to a stream of fragments). The fragment stream type is a functor, and thus shaders are defined by mapping operations over it.

My own system was called Spark, and while it wasn't trying to be functional, the top layer of the language is declarative and can sort of be seen as a simple (first-order) functional language. Transitioning data between rates was modeled explicitly as "plumbing operators," which could do exactly the kind of things you ask for (describe that a value at one rate should be computed by linearly interpolating values from another rate). Plumbing operators could be either system- or user-defined, and could be applied either explicitly or implicitly (when qualified with a Scala-like "implicit" keyword).

At the risk of tooting my own horn: the main novelty of my system was that by modeling inter-rate conversion more explicitly, I was able to target the more complicated stages of the pipeline (tessellation and geometry shaders) that most research systems hadn't (and still don't) tackle.

does Nile count?

as seen previously on LtU, Nile tries to 'reduce' (in a good way) a graphics pipeline to a mathy language.

Great, thanks

Thanks for the overview. GPipe and Spark sound the closest to what I was imagining. I've already read some of your thesis. Looks interesting.

I actually had some

I actually had some discussion with Conal about targeting geometry shaders functionally. The big problem there, however, is that geometry shaders are not functional; they are the first stage of the graphics pipeline that is overtly procedural. Tessellation is again functional; would it really be hard to support it in something like Vertigo?

GS: hard, Tessellation: maybe not

I think you are right on both counts.

So long as you are modeling the pipeline stages as explicit functions, you could add tessellation to Vertigo without too much overt complexity. In that sense you'd have tessellation exposed no better/worse than HLSL, GLSL, etc.

My statements should have been more narrowly scoped to systems that use rates/streams/staging/what-have-you to allow developers to easily write programs that operate across or abstract over multiple pipeline stages (that is: systems that start from the assumption that the HLSL/GLSL approach isn't good enough).

At what point would you just

At what point would you just be using a compute shader anyways to define your own pipeline?

Possibly soon, possibly never

The point where this is practical (from a raw performance standpoint) for a 1080p game at 30 or 60fps may be upon us soon. One of the big moments for game dev folks at SIGGRAPH was when Media Molecule talked about their almost entirely compute-based rendering system for their upcoming game Dreams.

The catch is that, all other factors being equal, a renderer that makes good use of the fixed-function bits of the pipeline should always be more power-efficient than one that only uses the general-purpose compute parts.

This same logic should apply to any case where an SoC architecture includes fixed-function or specialized hardware for particular tasks. Graphics is just one example.

That is quite interesting

That is quite interesting since we've already gone from fixed function to programmable function but fixed pipeline. It just seems that we'll eventually figure out how to have programmable function and programmable pipeline without such a big cost.

makes me think of soa

It's about the input space, not the code

Preamble: My obsession lately is communicating the global structure of programs. That seems more important than correctness or hygiene at any single point in time, because projects gain and lose people over time. If newcomers are able to gradually learn everything about a project as well as the original authors, we accumulate gains in correctness and hygiene and agility. But as long as we keep losing knowledge with people it seems hard to maintain a monotonic accumulation of gains. Result: projects slowing down over time (simplistically because new people need more oversight than original authors), large projects end up having "good" and "bad" versions (simplistically because the pendulum of attention oscillates between delivering new features and improving stability of existing features), etc. Hypothesis: codebases are hard to communicate because there's knowledge in people's heads that can't be reconstructed from the artifacts we currently capture.

In this context, I've been finding it very useful to focus on the input space a program is intended to support. The value in a program lies not in the lines of the code, but in what it does in different situations. Storing just code in a codebase is misleading because a small visible change (say adding a parameter to a function) can cause the invisible size of the input space -- and therefore the program's and its owners' responsibilities -- to multiply. I try to fix this in two ways:

a) Expanding the power of automatic testing. Tests are great because they help make the invisible input space visible as code. But we can't write tests for all the ways we care about our programs. I want to be able to write tests for performance, fault tolerance, race conditions, what happens when the disk fills up, and so on. This is unnecessarily hard because the foundations of our stack were designed in the 70s, when testability wasn't understood as a primary design constraint. If we had an arsenal of fakes for concepts at all levels of abstraction in the software stack, it would be easier to simulate the disk filling up, network behavior, race conditions, and so on. Modestly, my goal is to rethink the coevolution of language and OS given what we know now about testability (and white-box testing, higher-order functions, closures, first-class continuations, Hindley-Milner type systems, Lisp macros, ...)

b) Expanding the power of version control. After tests, I find version history the most useful in grappling with a strange new codebase. Rather than look at the code as it is now (which is likely the most complex it's ever been) I look for a simpler version that is easier to understand before "moving forward in time". Formalizing this practice has led to a very alien way of organizing programs in cleaned-up autobiographical order using literate programming that allows newcomers to learn a new codebase in successive stages, each dealing with a working program that passes its tests and is a little more complex than the previous stage.
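The "arsenal of fakes" idea in (a) can be sketched in a few lines: a fake disk with a capacity limit makes the disk-full error path testable without touching a real filesystem. (All names here are illustrative, not from any particular library.)

```python
# A fake disk that can "fill up" on demand, so the error-handling path
# is exercised deterministically in a unit test.

class FakeDisk:
    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0

    def write(self, data):
        if self.used + len(data) > self.capacity:
            raise OSError("No space left on device")
        self.used += len(data)

def save_log(disk, entries):
    """Code under test: must degrade gracefully when the disk fills."""
    try:
        for e in entries:
            disk.write(e)
        return "ok"
    except OSError:
        return "degraded"

print(save_log(FakeDisk(capacity=100), ["aaaa"]))            # "ok"
print(save_log(FakeDisk(capacity=10), ["aaaa", "bbbb", "cccc"]))  # "degraded"
```

The same shape works for fake clocks, fake networks, and fake schedulers; the hard part is having such seams at every level of the stack.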

More details: Lately I've been focusing on using it to teach programming 1-on-1. Conveying programming to non-programmers seems to have a lot in common with conveying codebases to programmers.

situation needing code response

That insight about global structure seems quite valuable. (Suddenly you're very interesting.) You remind me of my biggest peeve: code rarely comes with even minimal docs explaining the anticipated context, i.e., the situations expected to obtain when it matters what the code does in response. Code says what it does, but doesn't say what facts you believe are true elsewhere such that it matters what the code does. Important relationships with code elsewhere seldom get called out as important to grasping the motivation for local effects. Code can operate like falling dominoes, where it's hard to perceive how parts hit each other in sequence, and where the trail is going.


Very interesting, thanks.

The connection to version-control seems rather powerful and interesting. The fact that it comes with a proposed invariant (that removing a feature passes the non-feature-related tests) is helpful.

I'm not that convinced in the specific, aspect-oriented-ish interface you propose. I find that I would rather have the whole program in view, with enough annotation to understand (for humans and computers) the layering, and ability to fold/hide unrelated features for program comprehension purposes. To harp on the version-control analogy, rather than manipulating an empowered patch series, I would rather be supported by an empowered blame tool.

There is more to be said about which invariants the system should respect than just "removing a feature should not break the unrelated tests." What is the nature of the dependencies between features? Intuitively I would like to be able to claim that two features are independent, and have static checks / linting warn me if a change in one of the layers affects another that is not related. This reminds me of the work on using dependency analysis to detect semantic merge conflicts, furthering the version-control correspondence.

Yes, I'm more certain about the problem than my solution

> I would rather have the whole program in view, with enough annotation to understand (for humans and computers) the layering..

Yes, the current system has some tools to inspect the output of tangling because I have found it useful on occasion. It's on the roadmap to provide that ability in a nice way in the UI as well.

By the same token I spend a lot of time providing a nice way to "uncoil" a static program in a specific scenario and see the trace of actions it performs. Both are analogous abilities, and yet conventional stacks tend to deemphasize logging. For example, we build huge infrastructure for interactively stepping through programs and nothing to augment debug-by-print. I think it's a pervasive blind spot.

You never actually have the whole program in view, you only have a tiny peephole with which to probe at it, and you have to choose what groups of features you permit to show up on the peephole together. We're conditioned to think that "code inside a function" is the ground truth of a program, but that's not always the best approach.

> I would like to be able to claim that two features are independent, and have static checks / linting warn me if a change in one of the layer affects another that is not related.

Avoiding explicit dependencies between layers was basically a pragmatic decision to avoid the combinatorial explosion of testing all possible combinations of layers. Perhaps someone with different skills than me will come up with the perfect type system for this without causing combinatorial explosion. But even just testing layers 1, 1+2, 1+2+3 and so on is strictly superior to the current approach, where we only test all the layers together.
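That cumulative discipline can be sketched concretely: a build of the first k layers must pass the tests of every layer it includes. (The layer representation and names below are illustrative.)

```python
# Sketch of testing layers cumulatively: 1, then 1+2, then 1+2+3, ...
# Each layer carries its own definitions and its own tests.

def build(layers):
    """'Tangle' a prefix of layers into one program (here: a dict of fns)."""
    program = {}
    for layer in layers:
        program.update(layer["defs"])
    return program

layer1 = {"defs": {"add": lambda a, b: a + b},
          "tests": [lambda p: p["add"](2, 2) == 4]}
layer2 = {"defs": {"double": lambda a: a + a},
          "tests": [lambda p: p["double"](3) == 6]}

def check_cumulative(layers):
    """Every prefix of layers must pass all tests of the layers it contains."""
    for k in range(1, len(layers) + 1):
        program = build(layers[:k])
        for layer in layers[:k]:
            assert all(t(program) for t in layer["tests"])
    return True

print(check_cumulative([layer1, layer2]))  # True
```

This tests only n prefixes rather than 2^n subsets, which is the pragmatic trade-off mentioned above.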


In general, my bias is to spend less time worrying about how the code looks to the reader and instead assume/design a tactile interactive scaffolding around it that will help provide something akin to depth-perception feedback. Your visual cortex doesn't work nearly as well if your head is fixed in place and you can't make the slight movements that exclude many possible segmentations of a scene. But it doesn't have to work well in that situation, because you almost always can move your head.

If two features are totally

If two features are totally unrelated you can put each one in its own module, and that enforces the separation syntactically. Or is that not what you want?

out of band communication

I suspect that most of our approach to software development is so bad that just statically enforcing syntactic separation is probably insufficient.


I wish we were better at / had better ways to express equivalence classes. Dependent types? Things like QuickCheck are to me of that mindset. As are edge case tests, fuzz tests, acceptance tests (if they are done well, and that is not easy).

meandering notes on timing of changing code at runtime

The posts so far are quite interesting, which I appreciate very much. Before I comment elsewhere, I should expand my hinted idea a bit, but on the criticism side. (I alternate between selling myself an idea and then trashing it, so pros and cons are pursued with less bias. It's a bit like abstract destructive testing. At the moment I'm pursuing the "this adds nothing" angle.) Apologies for not having presented a summary overview, but I don't have one yet.

Most of what I was thinking can be reduced to analyzing roles of code and data without treating any of the patterns as interesting (though they seem to be). When code changes and you move it around, this deals with code as data and amounts to metaprogramming. Installing code changes data structures, making it visible for use as an option, provided all the stars align. Programming languages in general deal with code as data in this sense, as the metaprogramming step of causing code to show up.

My concern to have more control over when is partly caused by the domain of application, where things keep changing in a context where you can't stop and reset everything to zero before beginning from known origin in lockstep. (This is both work and hobby domain. At work config must change without killing all outstanding traffic, and in hobby context you want to inject handlers dynamically, say from a browser, without restarting daemons.) Changing the shape of what can happen at runtime makes you more interested in when code can be reached, which imposes a view of code doing nothing until it is reached, and a mental figure-ground inversion can occur. Instead of thinking about what happens when called, you think about negative space when code is not called. But that's only relevant from a system perspective about things changing, which only matters when you can control it, implying you are not at the mercy of how an operating system dictates outer context (because something distributed happens).

When you think up a way for great flexibility to occur, suddenly you must worry about how resources will get controlled, because exponential expansion is easy to do by accident. Having code run in a nice steady state indefinitely might be the exception, compared to sudden termination or running out of control until memory or disk is gone. In other words, writing software cancer is easy, running amok as if resources are not constrained, if you can easily spawn processes, threads, or fibers. The line between tracking to-do lists just enough and fork-bombing yourself might be hard to control. In normal code execution this is prevented by default, because LIFO stacks discard runtime resources by simply unwinding. Then if you never overflow the stack, nor hang by never returning to a caller, you are in the right ballpark already. So the old traditional model prevents some kinds of perverse error.

A scheme must exist to prevent spaghetti resource lifetime that can explode. Telling a coder "try to be careful" might not be enough, since they won't blame themselves.

I basically can't wind this down because I haven't thought it through enough. But there's a relevant angle in the operating system's role in telling you when things can happen, setting aside a cookie-cutter-sized hole for your code to run inside. When you consider what an OS is tasked with doing, you realize it has a miserable name, about baby-sitting hardware more than providing fluid options for software contracts. The shape of the input space is managed by the OS unless you get out from under it slightly by some tactic.

The shortest phrase I have so far for editing code availability is "pick shaping", where pick is a noun synonym for option, related to affordance except also including things that have not advertised themselves at all and thus fail to act like affordances. Not sure where I'm going with it.


I'm doing job interviews (to see what is going on in the rest of the world), and one of the take-homes is to do some small "service manager" stuff: support plugins, invoke them concurrently, and make the code as "ideal" as possible (where "ideal" is probably a bit of a Rorschach test).

Given the problem, I can see that I should require two different thread pools. One would be sized by my code to avoid starving anybody; the other would be up to the caller to decide how they want to size it.
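A minimal sketch of that split, using Python's concurrent.futures (the sizes and names here are illustrative, not a recommendation):

```python
# Two-pool split: a small pool the service sizes for its own housekeeping,
# and a caller-sized pool for plugin work, so plugins can't starve the core.

from concurrent.futures import ThreadPoolExecutor

class ServiceManager:
    def __init__(self, plugin_workers):
        # Internal pool: sized by us, so core work is never starved.
        self._internal = ThreadPoolExecutor(max_workers=2)
        # Plugin pool: sized by the caller, who knows their workload.
        self._plugins = ThreadPoolExecutor(max_workers=plugin_workers)

    def run_plugin(self, fn, *args):
        return self._plugins.submit(fn, *args)

    def heartbeat(self):
        return self._internal.submit(lambda: "alive")

mgr = ServiceManager(plugin_workers=4)
print(mgr.heartbeat().result())                      # "alive"
print(mgr.run_plugin(lambda x: x * 2, 21).result())  # 42
```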

It all feels a little bit like voodoo and chicken blood when it comes to, e.g., "oh, it worked on my machine." You set the thread pool to be MAXINT size?! Etc.

It would be great if we had 'language' to help reify these kinds of ways of looking at and designing code. Being able only to resort to Doxygen stuff is an indictment.

re threads and language to look at code designs

I suppose ideal means you avoid making sloppy choices that over-specify something (actual usage will only look like this, not that). So: kinda abstract clean api, maybe with a shim near OS api that turns calls into actual native types.

I guess reserving a pool for private infrastructure sounds good to ensure you cannot run out of essential service capacity, when you plan how it will receive load and know you keep some capacity available.

Things that can block need their own thread, if it would otherwise make requests wait that can run independently. It's common to try writing services with as little blocking code as possible, with non-blocking sockets etc, so multiplexing connections can be done in userspace, within just one thread. The way it's done might be secret sauce in a company. (But secret does not require pretty.) Probably there's an open source way to do it, but I haven't looked, planning my own instead.

There's a way to host many fibers in one thread, servicing non-blocking I/O as traffic shows up in a trickle on just a few ports at any given time, so exhausting the CPU is not easy to do unless you go nuts with encryption. But you can't easily use any code that blocks from a library that would otherwise be nice to re-use, developed by someone else.

To actually perform a blocking operation from such a context, you can make an async call serviced by another local thread pool devoted to executing code expected to block. But making code async in many languages is awkward, as you normally need to organize your code around it, often resulting in callback hell of some kind. But if you transpile to continuation passing style (CPS), you can manage to write source that looks like normal calls, appearing to block synchronously, that gets rewritten to async to do a lateral handoff to a thread pool (which awakens a fiber continuation on reply).
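The handoff can be sketched using Python's asyncio as a stand-in for a CPS transpiler: `await` makes the call site read like a normal synchronous call, while the blocking work runs on a dedicated pool and the event loop keeps servicing other tasks (fibers) meanwhile.

```python
# Blocking call made to look synchronous: the task "parks" at the await
# while a dedicated thread pool executes the blocking library call.

import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

BLOCKING_POOL = ThreadPoolExecutor(max_workers=4)

def blocking_lookup(key):
    time.sleep(0.05)   # stands in for a blocking library call
    return key.upper()

async def handler(key):
    loop = asyncio.get_running_loop()
    # Reads like a synchronous call, but hands the blocking work to the
    # pool and parks this task until the reply arrives.
    return await loop.run_in_executor(BLOCKING_POOL, blocking_lookup, key)

async def main():
    # Two handlers proceed concurrently despite the blocking calls.
    return await asyncio.gather(handler("a"), handler("b"))

print(asyncio.run(main()))  # ['A', 'B']
```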

You could come up with a posix api that does that, but you'd need a reference implementation for others to port if they hoped to always be able to use it. Part of the language to do that must include a model to keep in mind, with words people use to discuss detail. You end up with conventions like: threads are native (and block) while fibers are lightweight userspace things (and park), and stuff like that.

I'm glad you like the topic. Probably other folks merely tolerate my fuzzy, informal, exploratory posts because they seem to lack focus in a hard-driving dog-eat-dog sense of academic competition. Broad things are possible, but mostly we talk about narrow stuff. I didn't mean to go on a thread/fiber tangent, and it isn't needed to talk about the idea of alternate models.

To test out ideas, I've been imagining something like HyperCard running in one or more daemons, but it gets really messy when not single-threaded. That might be an understatement: maybe insanely hairy, when event handlers can run in parallel and plausible-looking code can fail to terminate. So I've been wondering about alternate ways to think about what's happening. While my own use cases are narrow, what if you told folks they could aim to do whatever they wanted? You can't really stop people from making messes, so I wondered what the messes would be like if you casually strung things together without thinking much. Even if you put in lots of safety mechanisms to stop things from running out of bounds, it would be easy to turn off all the checks by making limits big. Anyway, the idea is to guide folks away from gibberish and toward bounded, orderly models of some kind.

bounded orderly methods

Those are places I like to hear about, even if I never get to go there in person. Things like parallel skeletons, or actually theoretically correct approaches to eventual consistency, are all possibly good straitjackets akin to strict poetic form, in my mind. The Right Abstractions.

thank you for this topic

i love this stuff.

Reminds me of objects

Or rather, I was reminded of this while reading about objects. There is a nice summary of some of Alan Kay's contributions here. I can see a similarity between what Alan has reached for with his work and the idea you are proposing.

What you describe as waiting, and resetting to stable points, sounds a lot like a system of late-binding objects, where the focus is on the protocol that is visible between them. "Visible protocols between tightly coupled clusters of functionality" is one possible answer to your question of what other ways we have to see the global properties of code. Modules are another form of organisation that tries to capture the same kinds of relationship.

Maybe there is some pump for your intuition in the link.

perspective from tools

I like your suggestion, and I want this to sound positive, but it's hard to run with, because Kay's material has been familiar to me since the 90's. (Actually the late 80's, but perhaps it took me a few years to hear it over and over and have bias fall from my eyes.) So I have trouble mustering a new-ish response that doesn't sound trite. I can give you a lateral-thinking reaction, though, based on associating some old things with each other, which might be interesting even if it seems stale.

I noticed the recent HN discussion of mythz's 2013 post (first discussed on HN in 2013), which is what I assume prompted your comment. Since I'm going to make a remark about Rick Meyers, you might want to see my longer 2006 post here on LtU, with headline smalltalk hindsight, where I provide more context than I should here.

By 1990 my favorite languages were Smalltalk, C++, and Scheme in that order. But I had not coded professionally in Smalltalk when I went to work at Apple full time in C++, on Pink, where they hated dynamic languages as a matter of policy and you could get sarcastic remarks about Objective-C at the drop of a hat, apparently because dynamic dispatch by selector was considered an artifact of toy languages. (They were pretty sure NeXT was going to crash and burn.) When I objected and asked what the problem was, all I got was eyeball-rolling. I like the extreme late binding Kay talks about. I wrote a lisp in the late 80's, and a smalltalk in the early 90's that also had some hybrid stuff from lisp.

Nearly every quote in mythz's post is old to me, but most seem far from a central concept needing pursuit, except perhaps the part about instant interactive feedback when exploring an idea. In the 90's I tested my C++ with an interactive Lisp interface (using tools I wrote entirely myself), usually in an environment like MPW (Macintosh Programmer's Workshop), the text-oriented shell at Apple that simulated a Unix-style environment as a single Mac process; it ran tools in pipelines that were actually (roughly speaking) coroutines, scheduled within MPW much as you would schedule them via userspace code in Linux. I believe the general MPW architecture was due to Rick Meyers, though Dan Smith (now Dan Keller) wrote the shell application.

(Meyers and Keller worked on Pink's development environment, and we used MPW in the build system, but text-oriented interfaces were met with extreme hostility by many Pink devs because everything was supposed to be direct manipulation, due to a "graphical UI rules" religion. To enjoy text-based language tools was tantamount to advocating the use of CP/M or DOS command lines, which was heresy of the worst kind. So MPW was regarded as a necessary but temporary evil.)

I loved HyperCard's ability to open up a widget and show the script that runs when an event was received by that widget. I have always wanted this for every UI widget, so users can inspect how things work. (At Netscape this idea was met with the typical hostility; I think undiscoverable UI was considered cool at that time.) When I pitch this idea, typically I'm told it's "impossible" because a dev cannot imagine how binding would work, to relate what appears in a script to what runs in a backend in response. Not many devs seem to grasp how compilers and interpreters work, or else the cannot-be-done reaction is hard to explain.

I would like to have something that's like a bizarre hybrid of HyperCard, MPW, Unix shell, and web browser, so the answer to "what can happen?" is whatever you want. Unfortunately, some of the things you can do with that are horrific, like distributed denial of service scripts that would be hard to stop from spreading virally, unless you don't let users control how permission works (and I want to allow everything). So you can see where I get some "yes of course" attitude when reading Kay quotes like that.

Is there a specific Kay quote you want to talk about? Or a particular tool you want to hear riffs about? I could have just asked that alone, without all the rest above, but you wouldn't be able to guess where I was coming from. I need narrowing to make a topic discussible instead of broadening.

some follow-through summary

Having thought about it long enough, this is my summary: the interesting idea was imposed by the way I looked at things and asked questions. (This doesn't make the idea wrong, just not very external, and smacks of studying your own fingerprints as meaningful.)

But the idea of alternate basic models to frame behavior is still useful. I will resist writing a dialog, even though it conveys the issue better. One dev is obsessed with hammers, and has forty different words for fitting shingles together in patterns. He's a roofer, and thinks the devs interested in concrete and wiring are idiots because, you know, hammers.

Maybe what I'm missing is a standard terminology folks use to talk about different ontologies depending on the task and tools involved. We tend to use very similar words for quite different situations, and map the general idea to the local shape of problems, glossing over differences. (For example, what's a message? This includes all kinds of things, since any data hand-off can qualify, including subroutine calls.) So my original question is really one about interesting shapes of problem spaces that inspire good alternative views.

When we describe a problem (or design) with language and model tools, irrelevant things are abstracted away, since they don't matter. The concrete details of instantiating a version of code are accidental compared to essential model components. So when you ask "is it real?" the answer is usually yes, because you put it there, and it's manifest to the degree you can check. Whether the when and where aspects are significant depends on your intended model and whether accidental issues intrude.

In so far as hardware is limited and can't do everything simultaneously, obviously most code is not running until circumstances and control flow call for it to be invoked, from wherever it was kept. (This is the tautological "like, duh" part common to Turing machines and Von Neumann architecture.) In part, it's mutation that creates new kinds of role, because an option to change creates a menu of patterns in what updates with respect to other things. The ability to change code injects into the model the part of the system that enables such changes and keeps them meaningful when used.

In short, when you permit system effects to creep into the model describing a problem, you see system artifacts, just like the roofer sees hammers and nails when channeling a habitual process. I guess I just want to see more declarative tools about system effects integrated into my PL kit, as opposed to that angle being a forbidden POV due to concern partitioning. Where edges go in models should be a result of model design, not imposed by random stale convention. It seems like the more things you can declare, the more automated checking can occur. Maybe the hard thing is keeping results fluid instead of strangled in gridlock when conflicting declarations remove too many degrees of freedom.