
Simulation language ideas?

Hi,

I am piecing together a scripting language for gaming applications (something that will help in adding details on top of a "core" existing game structure: ways to add tutorial messages, particle effects, tween HUD elements in and out...), and I keep coming back to a particular model that I've seen in a few game scripting environments over the years: bringing an explicit timeline into the language, so that lines of code represent events in a frame of simulation, rather than a computation that can happen at "any" time.

The advantages of this model are easy to see: rather than maintaining the bookkeeping of a tiresome bundle of timers and event callbacks, the language becomes a descriptive storyboard for various actions. Both one-shots and tweened parameters (fades, bounces, etc.) can be defined beautifully in this way.
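For concreteness, here is a minimal sketch of what such a storyboard might look like if timeline scripts were modeled as coroutines. Python generators stand in for the hypothetical language; hud and its methods are made up, and each yield means "wait until the next simulation frame":

    # Sketch only: `hud`, `show`, `set_alpha`, and `hide` are hypothetical.
    def intro_banner(hud):
        hud.show("Press START")            # one-shot event on the current frame
        for _ in range(30):                # hold for 30 frames
            yield
        for alpha in tween(1.0, 0.0, 20):  # tweened parameter: fade out over 20 frames
            hud.set_alpha(alpha)
            yield
        hud.hide()

    def tween(start, end, frames):
        # linear interpolation, one value per frame
        for i in range(1, frames + 1):
            yield start + (end - start) * (i / frames)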

What I'm struggling with - and have been struggling with for the past few days - isn't really this aspect of it, but how to implement it in a beautiful, powerful way within a game engine. Crude approaches that only let the language trigger externally coded events are limiting. I considered an approach with a set of named "channels" that could aid in grouping, starting, and stopping timeline scripts concurrently, which added some reflective behavior, but it gradually led me towards bolting on a computational type system (including numbers, strings, lists, hashes) that could encompass the capabilities of the channel groups.
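Roughly, a bare-bones version of the channel idea might look like this (a Python sketch, all names placeholder): named groups of running timelines that can be started and stopped together, and that get stepped once per frame.

    # Sketch: named "channels" group running timeline scripts (generators);
    # a whole channel can be started or stopped, and everything is stepped
    # once per simulation frame.
    class Channels:
        def __init__(self):
            self.groups = {}                       # channel name -> list of running scripts

        def start(self, channel, script):
            self.groups.setdefault(channel, []).append(script)

        def stop(self, channel):
            self.groups.pop(channel, None)         # drop every timeline in the group

        def step(self):                            # called once per frame
            for name, scripts in self.groups.items():
                alive = []
                for script in scripts:
                    try:
                        next(script)               # advance one frame
                        alive.append(script)
                    except StopIteration:
                        pass                       # finished; discard
                self.groups[name] = alive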

In trying to add a type system, memory management reared its head: when one timeline instance ends, how should I destroy the variables it was using, given that there could be circular references? Garbage collection then comes into play as a possibility, and by this point the design was getting way too complex for me to feel comfortable with the implementation. Ideally it should be possible to serialize the whole game state and recall it at any time; with such a large system that task becomes daunting.

The approach I'm starting on now returns to looking at what the game engine is already doing for a domain model. It's an incomplete thought, but I figured it might make for good discussion:

The state of the art currently is to use relatively lightweight entities and hook components onto them. The components can "live" primarily in their own domain and exploit custom data structures, memory coherency, etc. The entity, meanwhile, only has to know which components/component instances it has. (In my own engine, I assign an id value that lives on both the components and the entity, so lookups can be done from anything to anything on the same entity.)
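To make that id convention concrete, a rough sketch (hypothetical names, not any particular engine's API):

    # The same id lives on the entity and on each of its components, so a
    # lookup can go from the entity, or from any component, to any other
    # component attached to that entity.
    class World:
        def __init__(self):
            self.components = {}                         # (entity_id, component type) -> instance

        def attach(self, entity_id, component):
            component.entity_id = entity_id              # the id lives on the component too
            self.components[(entity_id, type(component))] = component

        def lookup(self, entity_id, component_type):
            return self.components.get((entity_id, component_type))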

My current thought is to start building the language around access to the entity and component model, making the timeline scripts another kind of component: one where awareness of the other components is built in, and where the access methods to traverse the graph of world state (looking up collision boundaries, searching indexes of actors or objects, etc.) are available, but computation in the traditional sense is limited. But I draw a blank once I start thinking about the details of how this system might work. My intuition says that computation is a secondary need, since the host language can do it, but I find it really hard to break away and get new ideas.
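One possible shape for this, sketched with the same placeholder names as above (the script_factory convention and World.lookup are assumptions, not a worked-out design): the timeline is itself a component, it reaches sibling components through the shared entity id, and anything computationally heavy stays in the host language.

    # Sketch: a timeline script as just another component.
    class TimelineComponent:
        def __init__(self, world, entity_id, script_factory):
            self.world = world
            self.entity_id = entity_id
            self.script = script_factory(self)           # a generator: one yield per frame

        def sibling(self, component_type):
            # traverse to another component on the same entity via the shared id
            return self.world.lookup(self.entity_id, component_type)

        def step(self):                                  # called once per frame by the engine
            if self.script is not None:
                try:
                    next(self.script)
                except StopIteration:
                    self.script = None                   # timeline finished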

An intuitionistic logic that proves Markov's principle

“An intuitionistic logic that proves Markov's principle”, Hugo Herbelin, LICS 2010.

We design an intuitionistic predicate logic that supports a limited amount of classical reasoning, just enough to prove a variant of Markov’s principle suited for predicate logic.

At the computational level, the extraction of an existential witness out of a proof of its double negation is done by using a form of statically-bound exception mechanism, which can be seen as a direct-style variant of Friedman’s A-translation.

Markov's principle is an axiom of "Russian constructivism". It says that if P is a decidable predicate (i.e., ∀x. P(x) ∨ ¬P(x) is constructively provable), and we know that P is not always false (i.e., ¬(∀x. ¬P(x))), then there exists an x such that P holds (i.e., ∃x. P(x)).
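Written as a single formula (just restating the above):

    \forall x.\,\bigl(P(x) \lor \neg P(x)\bigr)
        \;\longrightarrow\;
        \Bigl(\neg\bigl(\forall x.\,\neg P(x)\bigr) \rightarrow \exists x.\,P(x)\Bigr)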

One operational way of understanding this axiom is that it says that if we know P is not always false (on, say, the natural numbers), we can find an element for which it is true with a while-loop: check whether P holds at 1, then at 2, and so on, stopping when you hit a number for which it holds. This means that Markov's principle is a principle of unbounded search, and as a result it has a controversial status in intuitionistic mathematics -- should we regard a program with a non-constructive termination proof as a constructively terminating program? (The answer is, as usual, "it depends": in different applications you can want it or not.)
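In code, the search reading is just this (a Python sketch, with P any decidable predicate on the naturals, represented as a boolean-valued function):

    def markov_search(P, start=0):
        # Unbounded search: termination is exactly what the non-constructive
        # premise ¬(∀x. ¬P(x)) guarantees; the loop itself carries no
        # constructive termination proof.
        n = start
        while True:
            if P(n):         # P is decidable, so this test always answers
                return n     # first witness found
            n += 1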

However, just chucking an axiom like ¬(∀x. ¬P(x)) → ∃x. P(x) into your proof rules is not very nice from a proof-theoretic perspective. It mixes up a bunch of different logical connectives, and adding it as an axiom will break things like the cut-elimination theorem for sequent calculus.

In this paper, Herbelin exploits (a) the fact that Markov's principle is a (mildly) classical principle of reasoning, and (b) the fact that classical logic is connected with control operators like continuations, to give a logic which is proof-theoretically well-behaved, but which supports Markov's principle. The trick is to add first-class continuations, but only for value types (i.e., you can throw products and sums, but not functions).
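To see the exception reading of this (only an informal sketch of the folklore encoding, not Herbelin's calculus): a proof of ¬¬∃x. P(x) can be read as a function that takes a refutation of ∃x. P(x) and must eventually apply it to some witness; intercepting that application with an exception extracts the witness in direct style, and only the witness (a value) ever gets "thrown".

    # Informal sketch only.
    class Found(Exception):
        def __init__(self, witness):
            self.witness = witness

    def extract_witness(not_not_exists):
        # `not_not_exists` plays the role of a proof of ¬¬∃x.P(x): a function
        # that takes a refutation of ∃x.P(x) and must eventually call it on
        # some witness.
        def refutation(x):
            raise Found(x)                 # escape with the witness (a value, not a function)
        try:
            not_not_exists(refutation)
        except Found as e:
            return e.witness
        raise AssertionError("the proof never produced a witness")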

What I find surprising is that this same class of restriction also arises in another paper on control operators at LICS -- Noam Zeilberger's paper "Polarity and the Logic of Delimited Continuations" uses it too. (This paper deserves a post here too.)