Beyond FRP: physics-inspired programming abstractions?

I've been doing a lot of physics and graphics programming lately, and it seems like many physics concepts might make good programming language abstractions. This idea is in the same spirit as how functional-reactive programming makes time-varying values direct abstractions in the programming language.

One such abstraction could be force. Basically, a force acts to change a stateful value over time. In this way, a force is very different from a signal, which more or less specifies a transformation of a stateless value according to time. Although we can fake physical effects such as bouncing using Penner's easing equations (e.g., BounceEaseOut), real simulation of physical processes requires that we basically abandon signals and use forces instead (possibly through a physics engine). In WPF, this means that rather than databinding a property, we update it periodically in the render loop.

Compared to signals, the nice thing about forces is that they are composable: the effects of multiple forces can be combined, while a value can't really be bound to multiple signals. E.g., a body's velocity (and ultimately its position) could be affected by gravity and one or more spring forces. Such forces could possibly be extended to other, non-physical domains, e.g., market price modeling (just as many physicists apply their skills on Wall Street). Collision detection and resolution techniques could be a nice way of dealing more efficiently with detecting the many kinds of events that can occur in a complicated system.
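To make the composability claim concrete, here is a minimal sketch (hypothetical names, unit mass, simple forward-Euler integration) in which a "force" is any function from body state to an acceleration contribution, and composition is just summation:

```python
# Hypothetical sketch: forces as composable functions of body state.

def gravity(state):
    # constant downward acceleration (m/s^2)
    return -9.8

def spring(k, rest):
    # Hooke's-law restoring force toward `rest` (unit mass assumed)
    def f(state):
        x, v = state
        return -k * (x - rest)
    return f

def step(state, forces, dt):
    """Advance one Euler step under the combined effect of all forces."""
    x, v = state
    a = sum(f(state) for f in forces)   # forces compose by addition
    return (x + v * dt, v + a * dt)

state = (1.0, 0.0)                      # initial (position, velocity)
forces = [gravity, spring(k=50.0, rest=0.0)]
for _ in range(1000):
    state = step(state, forces, dt=0.001)
```

Adding another force to the system is just appending another function to the list; nothing else changes, which is the sense in which forces compose where signals do not.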

A physics-inspired programming language could entail a drastically different object organization than occurs currently. Possibly, objects would not interact unless there were some physical connection between them (collision), possibly provided by the environment (e.g., via diffusion a la Repenning's anti-objects). The implementation of such a language would forgo a traditional interpreter for a physics engine, which could then be accelerated on the GPU using CUDA or through parallelization across multiple cores.

Any thoughts? Crazy idea, plausible PL research direction, or leave this kind of thing to the graphics community?


I don't get it...

I looked at Penner's easing equations, and I'm not sure why you are giving him so much credit. His book is likely useful for animators using ActionScript, and possibly GUI developers, but certainly not for anybody doing serious physics simulations. Take his bouncing code, for example:

class com.robertpenner.easing.Bounce {
	static function easeOut (t:Number, b:Number, c:Number, d:Number):Number {
		if ((t/=d) < (1/2.75)) {
			return c*(7.5625*t*t) + b;
		} else if (t < (2/2.75)) {
			return c*(7.5625*(t-=(1.5/2.75))*t + .75) + b;
		} else if (t < (2.5/2.75)) {
			return c*(7.5625*(t-=(2.25/2.75))*t + .9375) + b;
		} else {
			return c*(7.5625*(t-=(2.625/2.75))*t + .984375) + b;
		}
	}
	/* snipped for brevity */
}

This obviously isn't physically accurate, except possibly in very special cases: when you choose the "correct" b, c, and d; when the fourth bounce is too small to be relevant to the ongoing simulation; when you don't call this function with ever-increasing values of t; and whatever other convoluted assumptions you need to get this to work.

As for signals versus state, I don't see what you are getting at. Say you have a Brownian motion simulator with n particles, something that also involves collisions. Unlike the "bouncing" example above, I don't know of any efficient way to calculate the state of such a system as a function of time given initial conditions, and I would be surprised if such an algorithm exists. However, given the state of a system at time t, I can certainly calculate the state of the system at time t + d with reasonable efficiency (for an appropriately small value of d, and assuming there aren't significant inter-particle forces at play causing time complexities similar to the n-body problem).

Signals can be modelled using streams, and corecursive stream equations are a powerful technique for dealing with state. For example, corecursive streams are a rather attractive way of modelling digital circuits with delayed feedback, i.e., state. I haven't done a Brownian motion simulator based on this idea, though I'm sure I could.

I should have qualified this

I should have qualified this: I'm primarily interested in UI rather than in actual physical simulations. In UI, Penner's equations are used to simulate physical effects although no physical simulation is being performed. Also, a Penner equation can easily be used to transform a signal, e.g., bounceEaseOut(clock) * extent, so that the rectangle appears to bounce as clock goes from 0 to 1. If you want a better bounce effect in your UI, then the obvious path is through true physical simulation.

I've never dealt with corecursive stream equations. The signal systems that I have played with are asynchronous and live in imperative environments (Scheme, Java, .NET) where signals definitely cannot be modeled as streams. Granted, this is the other, formally looser world of FRP that focuses on efficiency over purity.

Functional reactive systems theory

I have to admit that I don't have much experience with FRP, so perhaps I'm missing something here. But I don't really see why forces cannot be accounted for as signals in the usual FRP model. The FRP model, as far as I'm aware, draws heavily on classical systems theory: signals are functions from time to some set of values, systems transform signals to signals (input to output). Memoryless systems produce an output based only on the present input. Systems with memory (stateful systems) produce an output that depends on previously received inputs as well as the current input. What's different about forces, as you've described them, is not that they can't be treated as signals, but that they (usually) act on systems that have memory. The position and velocity of a body depend both on the current force input, and on the state of the body (which is the result of previous force inputs). So simulation of physical processes doesn't require abandoning signals, it just requires stateful systems to transform the signals.

A simple example of a stateful system in an FRP context appears in Wan and Hudak's FRP from 1st Principles, on page 5:

  > integral :: Behavior Real -> Behavior Real
  > integral fb =
  >   \ts@(t:ts') -> 0 : loop t 0 ts' (fb ts)
  >   where loop t0 acc (t1:ts) (a:as)
  >           = let acc' = acc + (t1-t0)*a
  >             in acc' : loop t1 acc' ts as

If the input signal were a time-varying force, the output signal from the integrator would be the momentum (mass × velocity) of the body represented by the integrator system.

True continuous time varying

True continuous time-varying values don't seem to work well performance-wise, judging by the Haskell work switching focus to event-based models. Once you go to step-wise functions, true physics goes away. I didn't really understand how they could achieve true continuous values without theorem proving in the first place (e.g., pulling on integral(sin(t)) isn't enough), and that's important (a more realistic example: an event that occurs when a moving point crosses a line; sampling might skip it). Losing integrals and derivatives would hurt for writing complex animation and layout. OTOH, using the same concepts of time-varying values and limiting the operations or primitive values might work.
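To illustrate the missed-event problem, a tiny sketch (the `crossed` helper is made up): a point oscillating quickly across a line shows no sign change at coarse sample points, so sampling-based event detection can miss every crossing that a finer sampling catches:

```python
import math

def crossed(sample_times, f, level=0.0):
    """Detect a crossing of `level` via sign changes at consecutive samples."""
    vals = [f(t) for t in sample_times]
    return any((a - level) * (b - level) < 0 for a, b in zip(vals, vals[1:]))

# position of a point oscillating quickly around the line y = 0
f = lambda t: math.sin(40.0 * t)

coarse = [i * 0.157 for i in range(10)]    # roughly one sample per period
fine   = [i * 0.001 for i in range(1570)]  # many samples per period

print(crossed(coarse, f))  # the coarse sampling sees no sign changes at all
print(crossed(fine, f))    # the fine sampling catches the crossings
```

The coarse sampling rate here is deliberately near the oscillation period, so every sample lands on (nearly) the same phase; no step-wise function at that rate can recover the event.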

I'm trying to figure out how to make layout & typical GUI animations parallel this summer, and have been thinking along these lines (in particular, some combination of linear constraints, physics, and data flow or frp). Sounds like you're going along similar lines :) One recurring thought has been on where the precision is really necessary in all of this.

Somewhat related are the prefuse/flare libraries, which seem to be some of the best for creating charts & graphs using force-directed layout. This may be important for many UIs going forward, esp. for animation, but as I mentioned, mixing it with more typical constraints seems key to making it more generally applicable. Perhaps I'll finally do something useful ;-)

Simply efficient functional

Simply efficient functional reactivity claims to provide efficient continuous values.

that was what I was

That was what I was referring to: it introduces reactive values, which move away from continuous values (push-based), afaict.

confusion about reactive values and continuous time

Continuous time is still very important to me, for natural, simple, and composable modeling. The "simply" paper doesn't move away from continuous time. Rather, it shows how to decompose the specification of continuous reactive behaviors into two simpler notions: reactive values and time functions, with no loss of expressiveness.

A few of us are working on the Reactive library currently. It will be in good shape by the end of the summer.

Re: summer

That is pretty exciting; I hope I can get time to get my head around it all. Many thanks for the efforts.

Coupled states ..

I find FrTime and FRP in general a bit difficult to think with when it comes to coupled states - which feature in many systems where the physics is of interest. For example, consider two masses connected by a spring. The (x, p) state of either mass is coupled to the other through the spring's force function.

For such systems, the update cycle goes something like this:
1. Compute forces using the current (x, p).
2. Apply force to each p.
3. Apply p to each x.
4. Step time.
Repeat. (Steps 2 and 3 can be combined.)
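For concreteness, that cycle transcribed directly into code - a hedged sketch with assumed unit masses and made-up parameters, not tied to any FRP system:

```python
# Two unit masses coupled by a spring, following the update cycle above.
k, l, dt = 10.0, 1.0, 0.01   # assumed spring constant, rest length, timestep
x = [0.0, 2.0]               # positions
p = [0.0, 0.0]               # momenta (unit masses, so p == v)

def update():
    # 1. compute forces from the current (x, p)
    ext = (x[1] - x[0]) - l           # spring extension beyond rest length
    forces = [k * ext, -k * ext]      # equal and opposite
    # 2. apply force to each p ...
    for i in range(2):
        p[i] += forces[i] * dt
    # 3. ... and apply p to each x (4. stepping time is implicit in dt)
    for i in range(2):
        x[i] += p[i] * dt

for _ in range(1000):
    update()
```

Note that total momentum and the center of mass are conserved by construction here, which is one sanity check the coupled-state formulation should preserve however it is expressed in FRP.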

How would you express this naturally using FRP?

your example if expanded out

Your example, if expanded out, is just a set of differential equations, which can be expressed almost as-is in FRP. I don't see a problem here at all. In fact, an iterative representation of the same algorithm (like the one you stated) is less abstract.

can you give an example ..

It is interesting that you see otherwise. Would it be possible for you to give an example - say in FrTime, using two masses coupled by a spring in one dimension, with spring constant k?

.. for which the Hamiltonian is p1^2/2m1 + p2^2/2m2 + 1/2 k * (x2-x1)^2.

.. and my intention is to draw the two masses.

It's to solve the following

It's to solve the following set of equations:

x1 = x10 + integral v1
x2 = x20 + integral v2
v1 = v10 + integral a1
v2 = v20 + integral a2
a1 = F / m1
a2 = -F / m2
F = -k * ((x2 - x1) - l)

where x10, x20, v10, v20 are initial values, and l is the equilibrium length of the spring.

I'm not familiar with FrTime, so I'm not sure if it can directly express recursive definitions. But in the original FRP and in Yampa, you can write a program almost identical to the above equations.

thanks ..

.. and Yampa looks neat.

Numerical errors aside, the

Numerical errors aside, the precision problem can be overcome by taking a prediction approach rather than sampling. Generating a (time-ordered) queue of future events would guarantee that none are missed. It's an implementation issue.

A physics engine used in games and simulations pretty much does this too. To find out whether something has happened between two frames, it calculates the exact time an event takes place, and compensates in the next frame if the exact moment has been missed.
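A minimal sketch of that prediction idea (helper names are made up): for a 1-D ball between two walls, the exact time of each future bounce is computed analytically and kept in a time-ordered queue, so no bounce can be skipped regardless of frame rate:

```python
import heapq

def predict_hit(t, x, v, lo=0.0, hi=10.0):
    """Return (time, wall) of the next wall collision from state (x, v) at t."""
    if v > 0:
        return (t + (hi - x) / v, hi)
    return (t + (lo - x) / v, lo)

# simulate the first few bounces exactly, event by event
events = []                       # priority queue ordered by event time
t, x, v = 0.0, 2.0, 3.0           # ball at x=2 moving right at 3 units/s
heapq.heappush(events, predict_hit(t, x, v))

bounce_times = []
while len(bounce_times) < 3:
    t, wall = heapq.heappop(events)
    x, v = wall, -v               # impulse: reflect velocity at the wall
    bounce_times.append(t)
    heapq.heappush(events, predict_hit(t, x, v))
```

The simulation advances from event to event rather than frame to frame; rendering would just interpolate the ball's position between the queued events.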

True continuous time varying

True continuous time varying values don't seem to work well performance-wise

My intuition is that we could support continuous time-varying values efficiently and effectively if we used symbolic representations for the continuous parts - i.e., with piecewise discrete switching between polynomial or trigonometric-polynomial signals. Either can be represented with a simple vector of coefficients, and piecewise integrals, internal derivatives, etc. on these signals could be exact and efficient. (Derivatives at the piecewise edges might be less precise.)

The cost is that we'd need to find good continuous functions to represent the signals, which could make adapter code more 'interesting' (e.g., integrating microphone input or mouse position requires a Fourier transform or its inverse). Also, we'd need to deal with a lot of time-shifting of signals when adding or subtracting them. (I'm not sure whether the math for that is easy or hard.)

OTOH, if we can find zero-crossings and collisions more precisely (even if it's within the approximation), this may be worth the effort.
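The coefficient-vector representation might look like this (a hedged sketch; all names are made up): a polynomial signal segment is just its list of coefficients, and integration and differentiation become exact, closed-form operations on that list:

```python
# Hypothetical sketch: a continuous signal segment as the polynomial
# c[0] + c[1]*t + c[2]*t**2 + ..., stored as its coefficient vector.

def integrate(c, const=0.0):
    """Antiderivative of the polynomial, as a new coefficient vector."""
    return [const] + [ci / (i + 1) for i, ci in enumerate(c)]

def differentiate(c):
    """Exact derivative; a constant differentiates to the zero polynomial."""
    return [i * ci for i, ci in enumerate(c)][1:] or [0.0]

def evaluate(c, t):
    return sum(ci * t**i for i, ci in enumerate(c))

# velocity v(t) = 3 - 2t; position is its exact integral
v = [3.0, -2.0]
x = integrate(v, const=1.0)      # x(t) = 1 + 3t - t^2, exactly
```

Zero-crossings of such signals reduce to polynomial root finding, which is what would make the precise event detection mentioned above tractable.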

Conservative Logic

Conservative Logic

Conservative logic is a comprehensive model of computation which explicitly reflects a number of fundamental principles of physics, such as the reversibility of the dynamical laws and the conservation of certain additive quantities (among which energy plays a distinguished role). Because it more closely mirrors physics than traditional models of computation, conservative logic is in a better position to provide indications concerning the realization of high-performance computing systems, i.e., of systems that make very efficient use of the "computing resources" actually offered by nature. In particular, conservative logic shows that it is ideally possible to build sequential circuits with zero internal power dissipation. After establishing a general framework, we discuss two specific models of computation. The first uses binary variables and is the conservative-logic counterpart of switching theory; this model proves that universal computing capabilities are compatible with the reversibility and conservation constraints. The second model, which is a refinement of the first, constitutes a substantial breakthrough in establishing a correspondence between computation and physics. In fact, this model is based on elastic collisions of identical "balls," and thus is formally identical with the atomic model that underlies the (classical) kinetic theory of perfect gases. Quite literally, the functional behavior of a general-purpose digital computer can be reproduced by a perfect gas placed in a suitably shaped container and given appropriate initial conditions.

This seriously deserves its

This seriously deserves its own topic, maybe even a front-page mention.

I agree

I am reading it, and I agree, this deserves a front-page mention. I am still a bit baffled by the content, but it looks genuine.

Maybe a better versed

Maybe a better-versed physicist could chime in, but reversibility is something of a holy grail for programming languages. Not only could you theoretically save energy, but imagine the debuggers you could play with if you could step backwards in any context.

Very theoretical, though. It's still more like physics as a computer vs. using physics in computing.

Baffles me too

Which is exactly why I would like to see it discussed in a separate thread. I am, for one, totally unconvinced by the whole reversibility approach, or woefully underappreciative of its possible merits.

To me, it seems irreversibility is something you'd expect, or sometimes even depend upon, in a computational model. I expect not a lot of people would like all their multiplications (or exclusive-ors) to be reversible, certainly not from a security (algorithms) perspective.

Just because you could

Just because you could reverse a computation doesn't mean that you need to do so very quickly. We could talk about it in another thread, but this might be off topic for LtU (we've gone down the quantum path before...).

Make it a forum post

Probably the best thing to do here is for somebody to write it up as a forum post, and then Anton or Ehud can promote it to the front page.

I am interested in finding out if Charles Stewart has anything to say about this...

[Admin] Promoted to the front page

The people have spoken and I am a man of the people. You may find a front page post here.

Good on you.

Good on you.

forces via FRP

Force is indeed a nice, simple abstraction, and one that is very easy to code up in FRP. Just use a bit of arithmetic and a couple of integrals.
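As a sampled-stream sketch of that recipe (the `integral` helper is hypothetical, using rectangle-rule integration): divide force by mass to get acceleration, then integrate once for velocity and again for position:

```python
# "A bit of arithmetic and a couple of integrals":
# acceleration = F/m, velocity = integral(acceleration), position = integral(velocity).

def integral(samples, dt, init=0.0):
    """Running integral of a sampled signal (rectangle rule)."""
    acc = init
    out = []
    for s in samples:
        acc += s * dt
        out.append(acc)
    return out

dt, m = 0.01, 2.0
force = [9.8 * m] * 100             # constant force for one second
accel = [f / m for f in force]      # the arithmetic...
vel = integral(accel, dt)           # ...and the two integrals
pos = integral(vel, dt)
```

With constant acceleration of 9.8, the final velocity approaches 9.8 and the final position approaches the analytic 4.9, up to the integration step size.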

With collision detection and

With collision detection and impulse-based collision response? I don't really see how that could work; I'm not sure how integrals could possibly work in, say, a system with 100 circles bouncing off the walls and each other. Once we simulate collisions as impulses, I think you'd have to get rid of integrals. The alternative would be modeling the collisions precisely, which would be too expensive for real-time use.

Ha, nice that you mentioned

Ha, nice that you mentioned it. I have exactly the same example application of 100 balls bouncing off walls and each other, written in Yampa-like FRP. Indeed, I'm putting together a series of samples to be a tutorial for FRP, so I'm looking for ideas for suitable applications.

And yes, I modelled collisions precisely, and I don't see any problem doing it in real time. There is still much room for improvement in terms of algorithms, as I'm not exactly using a queue for future events. A nice optimization would be to use a priority queue.

future events?

How would the "queue for future events" interact with real time input?

To extend the example application, let's say there are 100 balls bouncing off walls and each other and an object controlled by a human or an external system. How does this fit in your model?

A queue for future events is

A queue for future events is merely a prediction, which can be modified accordingly if real-time input is also considered.

E.g., if a person only controls the force (acceleration), the movement of the object can still be predicted (either assuming no more input, or the same input from the person), and hence future collisions are calculated. Whenever the input changes, a new prediction is made to replace the old one.

If the location also comes from real-time input, then it's really a discrete signal. In that case, what do we mean by saying two objects collide? It may require a different definition.

In a physics engine,

In a physics engine, collisions are detected via a conservative broad phase and a more accurate narrow phase. The broad phase can involve swept shapes to predict whether bodies could have collided, or will collide, during the time period that the physics engine is skipping over.
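A sketch of such a conservative broad phase (hypothetical helpers; circles approximated by boxes): sweep each body's bounding box over the timestep and flag overlapping pairs for the narrow phase:

```python
# Hypothetical broad-phase sketch: sweep each body's AABB over the timestep
# (union of its start and end boxes) and flag pairs whose swept boxes overlap.
# Conservative: never misses a real collision, but may report false positives.

def swept_aabb(x, y, half, vx, vy, dt):
    """Axis-aligned box covering the body's motion over one timestep."""
    x2, y2 = x + vx * dt, y + vy * dt
    return (min(x, x2) - half, min(y, y2) - half,
            max(x, x2) + half, max(y, y2) + half)

def overlap(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

# two fast bodies that would tunnel past each other at a coarse timestep
a = swept_aabb(0.0, 0.0, 0.5, 100.0, 0.0, dt=0.1)    # moves 10 units right
b = swept_aabb(10.0, 0.0, 0.5, -100.0, 0.0, dt=0.1)  # moves 10 units left
print(overlap(a, b))  # True: send this pair to the narrow phase
```

Neither body's per-frame positions ever coincide here, but the swept boxes still overlap, which is exactly the tunneling case the broad phase is meant to catch.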

Cool. I'm looking forward to

Cool. I'm looking forward to reading the tutorial! There is a sort of big disconnect between the purer arrow-based FRP of Yampa and the more lightweight FRP-like systems I'm more familiar with (SuperGlue, FrTime). It's definitely possible that the pure FRP systems are more expressive and can represent this. In a lightweight FRP system, I mostly have to fall back on state and leave the language.

I've released a new version

I've released a new version of Bling with a more complete dynamic-simulation component (no collision detection yet). Windows, Visual Studio 2008 (Express is OK), and .NET 3.5 SP1 are required. More importantly, I've written up a small tutorial that is included in the distribution. A video of the tutorial project is located here.

distributed message passing concurrency

I wonder how languages like Erlang (or Termite Scheme/Gambit) with message-passing concurrency would work for this sort of application. Scaled on a cluster with a fast backbone link, the model may allow for a much higher amount of total accessible power, but I guess the devil's in the details of how far it can theoretically be parallelized. Anybody got any ideas about this?

Message passing is simply

Message passing is simply not in the near future of high-performance concurrency, nor do I think it will ever get there. Today, it's all about vectorization and SIMD a la pixel shaders and whatever else the GPU provides, and we are only going deeper down that path because it works.

Actor and message passing systems have advantages, but performance isn't one of them.

The idea of leveraging

The idea of leveraging distributed compute resources is reasonable to the extent the program is embarrassingly parallel. But I doubt it would be fun to express the various relationships of a large physics model in terms of message passing. A publish/subscribe model would likely be a better basis.

Trackers, Ableton Live and other strange music hybrids, oh my!

If you are involved in electronic music at all, you may have heard of a tool called Ableton Live. Live is an incredibly sophisticated tool for realtime composition--think about the immediate usability of a SLIME REPL in emacs vs. the normal edit-compile-test-debug cycle. Composing music in Live, as opposed to something like Logic, really stressed the importance of how, in my compositional process, my yields of creative energy were usually constrained by how naturally I could integrate the functions of recording, processing and selective playback into my functional memory.

Another example of this sort of process (that I have a sneaking suspicion some of you old hands may be familiar with) can be found in tracker software: for example, Renoise. Just as emacs + SLIME (and similar paradigms) turned writing code from a dead edit-compile cycle into a living read-eval-print loop, so too did trackers like Renoise change the compositional process from a dead cycle into a living system. The interfaces and function models of Live and Renoise (and their ilk) allow for a very intuitive use of the computer as an instrument for writing music. These functional models, which allow a computer to be used like a live instrument, are extremely interesting from a theoretical standpoint on interfaces.

What if all my programs were as immediate and soul-soothing as the programs I now proficiently use, like Traktor, Live and Renoise (all programs that can be used fluidly together)? I think that as I continue to learn *nix tools and beautiful, strange and powerful programming languages, I will perhaps learn how to impart to them that sublime, silky-smooth flow of thought that these applications provide and which I absolutely need, mentally. These compositional tools contain very useful and valuable gems of interface functionality. Take a look at them. Play with them, even (we ARE scientists here)--you'll find them quite addictive, which to me is a very useful way to quantify interface usability. The most fun user interfaces end up becoming the most usable.

SPAM or just off topic?

SPAM or just off topic?


I don't think 303 heaven belongs on LtU.

Not even that far off-topic

We had a conversation in #ltu about sequencing and tracking recently, which started from a language design POV (being IRC it wandered, of course).

If you squint, it's possible to see early trackers as equivalent to a limited VLIW-style asm language + one simple form of abstraction provided by patterns and the pattern sequence. They even have goto, though no conditionals. There are many cases where more involved abstractions could/would be useful for expressing common structure through a piece while appropriately modulating it (and then modulating the modulation, of course). That goes double when you're rendering via a powerful synth plugin rather than Paula or (to a lesser extent) the SID.

Yet there's a significant tension here. Users want to be able to bash out a bunch of low-level stuff as it hits them, potentially in the space of seconds, and start stringing it together just as quickly. Even if you're not looking to actively support livecoding, the desired workflow bears a distinct resemblance to it - I won't be the only tracker who's had to explain to someone why the same 4 bar loop's been playing for the last half hour.

So we're looking not just at the conceptual language of the finished product, but also that of the UI used to manipulate it. The problem isn't just how to express, say, an arpeggiator - it's how to make it callable. What to do about 'patterns' that play on more than one track or instrument, but not all of them - and how to support deviation from them. How to avoid cut'n'paste programming without cluttering everything up, and how to provide suitably rapid navigation within this program. That last issue also affects languages like Subtext.

To put it another way: why does an experienced Haskeller, who knows about projects like Haskore, instead work in an app that requires manually entering effects codes in hexadecimal?

A more concrete post?

It would be on-topic if it would be concrete. I know the addictive effects of SW-synths and MIDI sequencers, so sure, I have a warm heart on this topic. But maybe the poster wants to make it concrete?

Starting points

There are a few things in that post that are reasonable starting points for discussion, especially given that it's a link that hasn't quite been directly pointed out before. It's better treated as a topic opener than chided for topicality!

Things that were mentioned or implied include:

  • 'Programming' to generate music (or implicitly, other media)
  • REPLs, with allusion to flow (and thus, psychology and cognitive models) and contrast to more typical edit-compile-test-debug approaches
  • Languages for liveprogramming, and how usability requirements differ ("how naturally I could integrate the functions...into my functional memory")
  • Comparison with shells

An important idea worth mentioning here is that UIs are about the language in which a conversation takes place between user and program. The relationship between UI and data model, and the (apparent) shape of the data model, can both be important, as anyone who prefers LaTeX to Word can attest.

To what extent should UI requirements be allowed to affect the data model, and how do we handle the boundaries? Might we be able to apply existing research to help with the problem? Can we apply any of what we've learned about cross-cutting concerns to make it quicker to navigate between, say, sequencer data and the audio channels (and corresponding effects chains) the sequences are being played through? Can we bring effective abstraction to sequencing while retaining the UI throughput of a tracker, and without requiring the user to spend ages wiring controller data? Might an appropriate solution there even come from as unlikely a place as proof assistants?

Feel free to ask for explanations if I'm not fully making sense!

Been there done that.

On live programming: been there, done that. As for the live coding scene... it seems more like art than science to me. In that case, thinking about it from an engineering point of view isn't useful; it's more like an instrument, a really different beast.

Subjective experience with live programming processes

You are very correct in saying that it's more like an instrument (which is really quite a different beast), but that was my point. Something like FM synthesis would be an absolute dream in Haskell. I bought a DX7 a while back, and I can interact with it both like a musical instrument and like a scientific instrument. It lets me explore this curious combination of being both a listener and a maker of music: this higher-order functionality is extremely common in music, in a theoretically concrete way (for example, Schenkerian analysis, counterpoint, tonal analysis) that is also very concretely observable (it is audible). You want to talk about crazy physics-inspired programming abstractions? Get your hands on a synthesizer. Look on Google for the free Oatmeal VST instrument or the free Xhip instrument if you're on Windows, or ZynAddSubFX if you're running a suitable *nix. Play with it and really try to internalize the structure of the interface.

It might seem a little pointless at first, but all of these are good examples of powerful interfaces from a music-maker's point of view, however basic they might look to you. The synthesizer you use isn't really that important (they've all got their own characteristic flavor, something that has been refined with every revision the synthesizer has gone through), and with Renoise and other trackers you can use your keyboard as an effective interface for creating music (since Renoise can host VST instruments). The concept is very simple, but it allows me to continue analyzing macrostructures even when I'm out of the classroom or away from the computer. By making connections like these, we can make functional programming far more accessible, especially with something like physics and music (both are so immediate, beautiful and yet harmonious).

As a really good example of the sort of abstractions that would be cool to build, I'd like to propose Raymond Smullyan's perennial classic To Mock a Mockingbird (though I admit I've only scratched the surface of all the stuff it contains) as an example of the sort of incredible things we can accomplish: things that are as much a joy to read as Alice in Wonderland, but which still foster the development of abstract thought processes (in Smullyan's case, combinatory logic). When I thought about combinator birds, one of the first things I thought of was Pokémon--they're both animals that have access to interesting functions, and they can be composed together. When I think of it that way, I can connect all the seemingly dry stuff I'm learning with all the time I've logged playing Pokémon--not just on my GB, but online! Google Shoddy Battle (it's an open-source Pokémon battle simulator) and read up on the (yes, I'm not making it up) competitive battle analyses. There's a whole ecosystem here of teams of actors competing against other teams, battling with certain attributes combined with certain actions. It's a wonderland for someone who's hooked on FP, and you guys would get a kick out of it--that is, those of you who haven't gotten too cool for Pokémon yet!

I apologize for labeling

I apologize for labeling your last post as SPAM, but it's very difficult for me to parse your posts. You should try to organize your ideas into more concrete, focused chunks. I suggest you digest all the work on music PLs/live coding before you add physics to the mix. Also, if you are good at formalism, Conal Elliott's work is the closest you can get to making formal programming artistic.

I've also heard of programming language ideas based on the collectible card game paradigm (Pokémon belongs to this, but it started with Magic: The Gathering). Basically, you take a camera-based tabletop computer (e.g., Surface) and a deck of cards that the table can recognize. You express your program by combining cards from your deck on the table, much as you would in an MTG or Pokémon game. I don't think anyone has implemented it yet, but it's definitely been thought of.

Anyways, a bit off topic for this physics + PL thread, but thanks for contributing :)

I particularly like this post.

I pretty much agree with everything you have said here!

Functional Reactive Physics

It seems to me that FRP + physics is possible by representing 'objects' as FRP behaviors, and by supporting collections of these objects in a framework or physics engine. In practice, this would be structured very similarly to the physics engines I've used (just Chipmunk, really). It's just that everything would be defined using signals instead of method calls. Described in another thread.