## Dynamic Semantics

In a search of LtU there are many occurrences of the expression "dynamic semantics", usually in discussions of static and dynamic typing. It is easy enough to imagine what this might mean, but I wonder whether there is an accepted formal definition in computer science, similar to the definitions of axiomatic, denotational, and operational semantics. I found one suggestive paper, "Representing Action and Change in Logic Programs", but it doesn't directly answer the question. Does anyone know of anything? Thanks.

### If you open your copy

If you open your copy (wishful thinking... ;-) of The Definition of Standard ML (Revised), you can see that "Dynamic Semantics" refers to (roughly) the evaluation rules of the language while "Static Semantics" refers (roughly) to the typing rules of the language.
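
The split can be sketched concretely. Below is a toy expression language (the names and rules are ours, purely illustrative, not taken from the Definition) with its static semantics written as a typechecker and its dynamic semantics as an evaluator:

```haskell
-- Static vs. dynamic semantics for a toy language (illustrative sketch).
data Ty = TInt | TBool deriving (Eq, Show)

data Expr
  = Lit Int
  | BoolLit Bool
  | Add Expr Expr
  | If Expr Expr Expr
  deriving Show

-- Static semantics: the typing rules, checked before any evaluation.
typeOf :: Expr -> Maybe Ty
typeOf (Lit _)     = Just TInt
typeOf (BoolLit _) = Just TBool
typeOf (Add a b)   =
  case (typeOf a, typeOf b) of
    (Just TInt, Just TInt) -> Just TInt
    _                      -> Nothing
typeOf (If c t e)  =
  case typeOf c of
    Just TBool | typeOf t == typeOf e -> typeOf t
    _                                 -> Nothing

-- Dynamic semantics: the evaluation rules.
data Val = VInt Int | VBool Bool deriving (Eq, Show)

eval :: Expr -> Val
eval (Lit n)     = VInt n
eval (BoolLit b) = VBool b
eval (Add a b)   = case (eval a, eval b) of
                     (VInt x, VInt y) -> VInt (x + y)
                     _                -> error "ill-typed: rejected statically"
eval (If c t e)  = case eval c of
                     VBool True  -> eval t
                     VBool False -> eval e
                     _           -> error "ill-typed: rejected statically"
```

The "roughly" fits this picture: `typeOf` rejects `Add (Lit 1) (BoolLit True)` before `eval` ever runs, which is exactly the static/dynamic division of labour the Definition makes.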

### I guess this

I guess this characterization wouldn't apply to dynamically-typed languages, which is what I suppose you're referring to with the "roughly" qualifier.

### It does

> I guess this characterization wouldn't apply to dynamically-typed languages

That depends on whether you accept the term "dynamically typed" as technically accurate. Most programming language theorists don't.

And from that point of view the characterisation applies, because technically the respective languages do not have a type system and hence not much of a static semantics (if you neglect binding structure, which is often considered mere syntax).

### ... and why don't they?

> That depends on whether you accept the term "dynamically typed" as technically accurate. Most programming language theorists don't.

And why don't they? The practical origins of type systems in programming were as a means to discriminate between integer and floating point representation. The classification of values into types during execution deserves a name and "dynamic typing" seems to be a perfectly descriptive term despite being dismissed by some theorists.

### From the perspective of the PL...

...the dynamic types don't exist - they are merely program logic wrapped in the abstraction facilities provided by the language. Or to use an analogy, I can write a program that models the flight characteristics of a hot air balloon. Yet, it would be a stretch to label most any PL as a "Hot Air Balloon" programming language. PL theorists are concerned with what exists from the perspective of the programming language.

### Whys and Wherefores

> cdiggins: And why don't they? The practical origins of type systems in programming were as a means to discriminate between integer and floating point representation. The classification of values into types during execution deserves a name and "dynamic typing" seems to be a perfectly descriptive term despite being dismissed by some theorists.

First of all, it's virtually all theorists who decline the term (and I only qualify it with "virtually" because I'm sure there's one out there who would prove me wrong if I didn't). Here's why "virtually all" is correct:

It's important to remember that Type Theory predates electronic computing—it arose in the context of mathematical logic to address some of the issues that arose, e.g. Russell's Paradox. So from inception, Type Theory has had the mandate of disallowing logical nonsense in the domain of discourse.

As I've written elsewhere but am too tired to look up just now, if you're going to have a phase distinction between compile-time and runtime, then it's helpful both to yourself and to your users to disallow as much logical nonsense in your code as possible—the more errors that can be caught at compile-time, the fewer will be left for your users to discover the hard way at runtime. So type systems have gotten increasingly rich, by which I mean increasingly specific about what operations can and cannot be performed on terms of a given type, as time has gone by. However, unless your program takes no input whatsoever, it's quite unlikely that you can do all computation at compile time, so the dynamic semantics of your language matter a great deal in practice.

In this context—that is, the context of a sharp phase distinction between compile-time and runtime—the concept of "type" that we've inherited from the mathematical logic community doesn't have a good analog at runtime. Well, actually, it does: there's the notion of a "universal type," which all values in a "dynamically typed" language are members of. Since there is only one such type in the language, no term can be considered to be distinguished from any other term by type, and so such languages are typically called "untyped." This isn't a slur; you'll find a great deal of literature on the "untyped lambda calculus," which was so-called by Alonzo Church himself, just as the "simply-typed lambda calculus" was also so-called by Church. Given this history and context, it's incoherent to talk about "dynamic types"; if they can't be proven correct at compile-time, they aren't types.
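
A minimal sketch of that universal type, under made-up constructor names: every value carries a tag, and what a Scheme-like language reports as a "type error" surfaces as a runtime tag check.

```haskell
-- One universal type that all values inhabit (illustrative sketch;
-- these names are ours, not any particular language's).
data Univ
  = UNum Double
  | UStr String
  | UFun (Univ -> Univ)

-- The tag only describes the outermost constructor: for UFun it says
-- "this is some function", nothing about its domain or codomain.
tagOf :: Univ -> String
tagOf (UNum _) = "number"
tagOf (UStr _) = "string"
tagOf (UFun _) = "function"

-- Application checks the tag dynamically, as a dynamically-checked
-- language's runtime would.
apply :: Univ -> Univ -> Univ
apply (UFun f) arg = f arg
apply v _          = error ("not applicable: " ++ tagOf v)

plusOne :: Univ
plusOne = UFun (\v -> case v of
                        UNum n -> UNum (n + 1)
                        _      -> error "number expected")
```

Note that from the statically typed host's point of view there is exactly one type here, `Univ`, which is the sense in which such languages are "untyped".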

Now, of course, there are a number of issues around this:

• In an untyped language, the distinction among an operation on a term throwing an exception, diverging, or exhibiting undefined behavior is not only real but important.
• The sharp phase distinction between compile-time and runtime is getting increasingly blurry. Dependent types, abstract interpretation, multi-stage programming, partial evaluation, soft typing, gradual typing, and no doubt other technologies I'm unaware of all alter the relationship between compile-time and runtime.
• Great strides have been made in the programming community at large towards the realization that automated testing is an invaluable tool. In a great many domains, it's cost-effective to work in an untyped language, ensure that you write extensive automated tests, and accept that your tests won't provide 100% coverage and that it's impossible for tests to, e.g. demonstrate the lack of deadlock or race conditions in concurrent code. See The Unreasonable Effectiveness of Testing for more about this.
• Recently, some effort has been put into sharing analyses between compile-time and runtime even if the language in question doesn't explicitly facilitate such sharing; see DSD-Crasher for a specific example.
• Do we want English to be descriptive or prescriptive? If you lean towards prescriptivism and have a background in Type Theory, you'll probably never be inclined to accept "dynamic typing." If you lean towards descriptivism and/or don't have a background in Type Theory, you're more likely to conclude that "dynamic typing" is already a largely accepted term of art ("Terms like 'dynamically typed' are arguably misnomers and should probably be replaced by 'dynamically checked,' but the usage is standard." — Types and Programming Languages), and from there you can decide whether or not to dismiss the type theorists/prescriptivists as a bunch of cranks. :-)

That's probably enough from just off the top of my head. So I guess my point is that there's a lot of effort lately that seems to revolve around moving beyond the compile-time and runtime phase distinction, and that once we do that, inevitably we must revisit the question of types. Having said that, type theorists have history and extant successes on their side: it's not an accident that Scala and Objective Caml are among the only statically-typed languages not to suffer from the Expression Problem, while also having two of the most carefully designed type systems in the world. Similarly, it's not an accident that preventing deadlock and race conditions at compile time involves a realization of the Pi Calculus as a type system. So the type theorist's point is basically that formalization matters—it's fiendishly difficult to get, e.g. higher-order polymorphism and subtyping right without proofs—and, in particular, it's like virginity; once you give it up, you can't get it back ("One can't proceed from the informal to the formal by formal means" — Alan Perlis).

The trick, then, is to keep talking about the increasingly-blurry phase distinction as formally as possible. The comments that several of us have made about "sliders" and partial evaluation and the Futamura Projections hint at a belief that it is possible to reconcile Type Theory and "dynamic types," but type theorists will reliably (and rightly, IMHO) reject anything that smacks of "I don't need no stinkin' proofs; I know what I mean." This is why I personally find Gradual Typing for Functional Languages so compelling. It's all about the last sentence in the abstract:

> We prove that this calculus is type safe and that the cost of dynamism is "pay-as-you-go".

Does this help at all? I really am interested in bridging this gap. I believe it can be bridged—indeed, that it is being bridged. But I realize that I still have some penance to do for a number of intemperate remarks that I've made here over the years.
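
One concrete point on that bridge, for what it's worth, is Haskell's Data.Dynamic: `toDyn` packages a value with a runtime representation of its static type, and `fromDynamic` is a checked cast back into the typed world. A small sketch (the `box` and `firstInt` names are ours):

```haskell
-- Recovering dynamic checks inside a statically typed language,
-- via the standard Data.Dynamic API (illustrative sketch).
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list: static typing is deliberately forgotten...
box :: [Dynamic]
box = [toDyn (1 :: Int), toDyn "two", toDyn (3.0 :: Double)]

-- ...and recovered by a cast that succeeds only when the runtime
-- type representation matches the statically expected type.
firstInt :: [Dynamic] -> Maybe Int
firstInt []       = Nothing
firstInt (d : ds) = case fromDynamic d of
                      Just n  -> Just n
                      Nothing -> firstInt ds
```

This isn't gradual typing in the paper's sense, but it has the same pay-as-you-go flavour: only the values you wrap in `Dynamic` pay for runtime checks.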

### Hidden Algebra

Hidden Algebra

> The hidden algebra approach takes as basic the notion of observational abstraction, or more precisely, behavioral satisfaction: this means that hidden specifications characterize how objects (and systems) behave in response to a given set of experiments, rather than how they are implemented; this provides a version of what is called a behavioral type in the object oriented jargon, but which we prefer to call a hidden theory, behavioral theory, or hidden specification. A hidden correctness proof for a refinement (or implementation) shows that one hidden theory behaviorally satisfies another, in the sense that any model of the second theory yields a model of the first; however, the formal proof is algebraic, not model-based. Successive refinement of designs is a useful model of software development, which avoids the unproductive "semantic swamp" of verifying actual code.

This is the third paragraph in the above link. My take on this is that hidden algebra is the "type specification" of the parts of a program. In order to "run" the program we must put the pieces together. The dynamic view focuses on the "answer" (i.e., the behavior) as how the pieces may be put together. In the dynamic view the specification is hidden, thus hidden algebra. If I am not mistaken, this is really just coalgebra vs. algebra. I think I can visualize this but I don't really understand the math, so I will leave it at that.

### In this context—that is,

> In this context—that is, the context of a sharp phase distinction between compile-time and runtime—the concept of "type" that we've inherited from the mathematical logic community doesn't have a good analog at runtime. Well, actually, it does: there's the notion of a "universal type," which all values in a "dynamically typed" language are members of. Since there is only one such type in the language, no term can be considered to be distinguished from any other term by type, and so such languages are typically called "untyped."

You can however apply the formalisms of type theory to the classification of values at runtime, which is why I think the term "dynamic typing" is reasonable. Consider a possible language which has only runtime types, while maintaining all of the sophistication of an ML type inference engine.

### Think higher-order

> You can however apply the formalisms of type theory to the classification of values at runtime

For any interesting language (read: a higher-order one) you cannot do so without seriously blurring the distinction between static and dynamic. For example, to classify a value of function type the way "type theory" does, you have to type-check its body - which is not a value but an arbitrary expression/statement/program.

> Consider a possible language which has only runtime types

You have not yet defined what a "runtime type" is. The point being that the tags used for dynamic checks in most "dynamically typed" languages do not resemble types in the standard theoretic sense. In the theoretic sense they are little more than the "syntax" of values.

Notwithstanding more involved notions of "runtime types", e.g. in Common Lisp, which are rather boolean procedures...
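
Sketching that last point (illustrative only, not a Common Lisp implementation): a `satisfies`-style runtime "type" operationally is just a boolean procedure over values, and a term's passing the check tells you nothing static about it.

```haskell
-- Common Lisp's 'satisfies' types, rendered as what they are
-- operationally: boolean procedures. (Names here are ours.)
type RuntimeType a = a -> Bool

evenNat :: RuntimeType Int
evenNat n = n >= 0 && even n

-- "Checking the type" is just running the procedure at runtime;
-- the static type system sees only 'a' going in and coming out.
checkThe :: RuntimeType a -> a -> Either String a
checkThe p x
  | p x       = Right x
  | otherwise = Left "check failed"
```

The contrast with a theoretic type is stark: nothing about `evenNat` can participate in compile-time reasoning, because it is an arbitrary computation.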

### Dynamic types are a secondary effect

> You can however apply the formalisms of type theory to the classification of values at runtime which is why I think the term of "dynamic typing" is reasonable.

But it's not really the runtime "types" that are interesting. To reason successfully about a program, you have to know which operations can be applied to which terms (i.e. to which values), and you have to know that statically — even in a dynamically-checked language (see Why type systems are interesting and its followups).

IOW, reasoning about programs relies largely on static properties of programs. "Dynamic types" are mainly a way of ensuring that the static assumptions we've made aren't violated — in most languages, they aren't themselves "types" in any interesting sense, although there's some overlap at the level of atomic types such as strings, numbers etc. Focusing on these as though they are types is the tail wagging the dog: "dynamic types" are (mostly) a mechanism used to support our reasoning about static properties.

(And no, I haven't forgotten my geas imposed by Vesa.)

### Expression Problem

I wasn't familiar with this, so thought I'd add a note - it's a reference to a posting by Phil Wadler to the java-genericity mailing list on 12 Nov 1998. Expression Problem

### That's Right

That is indeed the origin of the term. Pursuant to my comment about O'Caml and Scala, the relevant papers are On the (un)reality of virtual types and Independently Extensible Solutions to the Expression Problem, respectively. The latter, in particular, includes the best summary of the problem and the various approaches taken to addressing it I've yet seen.

### I usually use the terms as

I usually use the terms as defined here. A clear example would be defining a variable. The static semantics deal with the type and name of the variable, etc. The dynamic semantics deal with memory allocation and so on. As an example, check the Ada RM's description of object declarations; you will see the difference between the static and dynamic semantic definitions.

### Operational Semantics?

In these semi-formal definitions, I can't see the difference between Dynamic Semantics and Operational Semantics. Are these used synonymously?

### No. Operational semantics is

No. Operational semantics is one style of formally specifying dynamic semantics, but denotational semantics (indeed even axiomatic semantics!) can be used to describe the dynamic properties (e.g., the store, continuation etc.) of language constructs.
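
For a tiny arithmetic language, the two styles describing the same dynamic semantics might look like this (a sketch; the constructor and function names are ours):

```haskell
-- The same dynamic semantics given in two formal styles.
data E = N Int | Plus E E deriving (Eq, Show)

-- Operational style: small-step rewriting of expressions
-- until a value (a plain numeral) is reached.
step :: E -> Maybe E
step (Plus (N a) (N b)) = Just (N (a + b))
step (Plus a b)         =
  case step a of
    Just a' -> Just (Plus a' b)
    Nothing -> fmap (Plus a) (step b)
step (N _)              = Nothing

run :: E -> E
run e = maybe e run (step e)

-- Denotational style: each expression denotes a mathematical value
-- directly, with no notion of execution steps.
denote :: E -> Int
denote (N n)      = n
denote (Plus a b) = denote a + denote b
```

Both assign the same meaning to every program; they differ only in *how* that meaning is presented, which is the distinction the parent comment is drawing.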

### Informal Definition

I use the term "dynamic semantics" to basically mean that aspect of what is computable, in the Turing-complete sense, that can't be evaluated at compile time in the language under discussion. That is, if language X's type system exists at some point on the continuum from being untyped to being Turing-complete:

```
<------------|------------------------------------------------------>
untyped      X                                        Turing-complete
```

Then everything to its "right" on that line constitutes the "dynamic semantics" of the language, assuming that the language itself is Turing-complete. In the example line above, "X" might be Standard ML or O'Caml. Move X to the left a bit and you're talking about Pascal or C. Move it to the right a bit and you're talking about GHC. Move it until it's about 3/4 of the line to the right, and you're talking about Epigram, Coq, Twelf, HOL... Move it all the way to the right and you're talking about Cayenne, Qi, C++ (!)...
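
To see the slider in miniature (an illustrative sketch of my own, not from any of the languages above): the same "non-empty list" property can sit on either side of the phase distinction.

```haskell
-- The same property handled on either side of the phase distinction.

-- Runtime: the check is part of the dynamic semantics; failure is a
-- runtime error the type system never sees.
headDyn :: [a] -> a
headDyn (x : _) = x
headDyn []      = error "empty list"

-- Compile time: a list that is non-empty by construction, so the
-- corresponding check has been moved "left" on the line. (NonEmpty
-- is defined locally here, though GHC ships an equivalent type.)
data NonEmpty a = a :| [a]

headStatic :: NonEmpty a -> a
headStatic (x :| _) = x
```

Moving X to the right means more properties get the `headStatic` treatment; moving it left means more of them get the `headDyn` treatment.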

Now, it should be obvious that this is an oversimplification along a couple of, er, dimensions. As that pun suggests, the continuum isn't a line; it's actually the Barendregt cube, because, of course, type systems have more than one feature and they're orthogonal to each other. Secondly, I haven't accounted for the ways in which the type system and dynamic semantics interact, as should be obvious from where C++ winds up on the spectrum: it's true (by accident) that C++'s type system is Turing-complete, but because it's not true by design, template metaprogramming in C++ is a royal pain in the neck, the interaction between template metaprogramming and plain ol' programming is a royal pain in the neck, and the only way to minimize the anguish is to ruthlessly leverage the anguish of others.

Still, I think even this oversimplified model is useful when talking about subjects such as multi-stage programming, partial evaluation, abstract interpretation, JIT compilation, expressive power, and so on.

Does this make sense?

### Not sure about the accuracy

I think your definition can be useful as source of intuitions, but I'm not sure of its accuracy.

The main problem I see is that actually, a lot more type systems are Turing-complete than you seem to believe. For example, as demonstrated recently, GHC's is Turing-complete (see an embedding of the SK calculus here: http://www.haskell.org//pipermail/haskell/2006-August/018355.html). I would expect the OCaml type system to be somewhat Turing-complete too, with the help of the module language (at least because some mappings between the GHC type system and the OCaml module system have been studied recently; since the GHC type system is Turing-complete, the existence of a mapping into the OCaml module language would tend to indicate that the latter is Turing-complete too, imho).
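
The flavour of such type-level computation can be seen in a small sketch using the same extensions the SK embedding relies on (multi-parameter classes, fundeps, undecidable instances). This is only Peano addition solved by instance selection, not the linked embedding itself, and the names are ours:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- Compile-time Peano arithmetic via the class machinery: 'Add x y z'
-- is solved entirely by instance selection during type checking.
data Z   = Z
data S n = S n

class Add x y z | x y -> z where
  add :: x -> y -> z

instance Add Z y y where
  add Z y = y

instance Add x y z => Add (S x) y (S z) where
  add (S x) y = S (add x y)

-- Reflect a type-level numeral back to an Int so the result of the
-- "type-level program" can be observed at runtime.
class ToInt n where
  toInt :: n -> Int

instance ToInt Z where
  toInt Z = 0

instance ToInt n => ToInt (S n) where
  toInt (S n) = 1 + toInt n
```

Here `add (S (S Z)) (S Z)` is given the type `S (S (S Z))` by the type checker itself; with UndecidableInstances, the same instance-selection process can encode arbitrary computation, which is the point being made about the line getting crowded on the right.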

With a lot of "powerful" type systems being Turing-complete, I think your line is becoming seriously overcrowded on the right, making it non-pertinent for the analysis of the systems that find themselves at the right extreme of the spectrum. There are vast differences between GHC and Qi (for example), yet both type systems would appear at the same point on your line.

In the end, I think it boils down to the old truth that Turing-completeness of something doesn't tell you much about the "expressiveness" of said thing, for most of the definitions of "expressiveness" people have in mind.

### Fair Enough

I think these are all good points, and your comment about aiding intuition strikes precisely the note that I was aiming for. C++ is another good example: its type system is Turing-complete, but leveraging that fact in day-to-day programming is virtually guaranteed to be more trouble than it's worth. Based on the SK-combinator-calculus embedding into GHC, I'd say essentially the same thing: interesting result. Not likely to affect Haskell programmers much. So perhaps instead of what the "actual" expressive power of the type systems used as examples are, we should rely instead on the less formal notion of "how people actually use the type systems in question," since an extremely expressive type system that's impenetrable to most programmers isn't going to impress anyone—and that serves as a sharp reminder of the importance of, e.g. Tim Sweeney's efforts to craft a language with a pure functional core, dependent types, lenient evaluation, etc. but with a Pascal/ML-inspired syntax that isn't "scary."

### To clarify

To be a bit more clear, I think your notion of "dynamic semantics" connected to a different dimension of expressiveness is interesting, but that we need to define it wrt a more appropriate notion of expressiveness than Turing-completeness. Of course, such an accurate and tractable notion of "expressiveness" would be the holy grail of programming language semantics, and I don't have anything approaching it in stock yet :)

Along your idea of a spectrum of expressiveness, maybe an interesting way to quantify expressiveness would be to compare different type systems wrt a single language known to be more expressive (statically speaking) than the ones you want to compare. You build implementations (or embeddings/isomorphisms, for that matter) of the "reference" language in the different languages, and then you can place them on a line (or a cube) much like the one you described, depending on how much of the reference language's properties could be expressed statically versus checked at runtime.

But maybe we're drifting too much on the "fuzzy-theoric" side for LtU :)

### Backing Up

Actually, my comment here is that I'm not trying to measure "expressive power," in the same sense that we use the phrase to talk about whole programming languages, in type systems; I really am trying to talk about what they can compute at compile time, vs. having to leave things to runtime, where presumably anything that's computable goes, again assuming a Turing-complete language. So I feel the need to backpedal a bit from where we are now.

On the other hand, I am also trying to talk about a continuum that exists in the power of type systems—again, distinctly, I think, from whole languages—and that continuum already has a pretty compelling metaphor, IMHO, in the Barendregt cube. So perhaps I should simply have said "my definition of the dynamic semantics of a language is everything that's up and/or to the right of where the language's type system finds itself on the surface of the Barendregt cube," and not made the poor attempt to identify where some languages sit on the line that I made up, which was already quite artificial. Again, all I was trying to do was help readers form an intuition, perhaps with a visual aid of some kind. :-)

### GHC's type system is only

GHC's type system is only Turing-complete with certain extensions turned on; it doesn't really make sense to just talk about GHC's type system unless you really want implicit parameters with everything ('scuse me while I go vomit).

### I'm not sure I follow you.

I'm not sure I follow you. Implicit parameters aren't an extension of the type system, and the SK-calculus embedding we're talking about doesn't use all of the extensions available in GHC, only fundeps, undecidable instances and multi-parameter type classes.

### I could be wrong, I was

I could be wrong; I was under the impression implicit parameters showed up in a function's type (or was that just linear implicit parameters)?

Undecidable instances is more or less the "make it Turing-complete" switch in the presence of multi-parameter type classes, too. It's one that people struggle to avoid needing - and thus not one that's in most people's day-to-day use, even for the sake of using an existing library. Notably, it's not enabled by -fglasgow-exts.

### Support is definitely needed

Support is definitely needed at the type level for implicit parameters, but what I meant is that I don't think they add any expressive power to the type system (you've got no way to "create" them at the type level; you can only "lift" them from values), so they're not relevant to the issue of Turing-completeness/expressiveness, imho.

Sorry for the confusion :)

### Oh yes, and I forget whether

Oh yes, and I forget whether the mapping as defined assumed the availability of undecidable instances - most flavours of ML module system aren't Turing-complete without extensions that aren't needed to translate type classes; the undecidability with type classes comes from choosing instances rather than building them.

### Paraconsistent

This interesting word, Paraconsistent, came up in another thread. The subject is closely related to many issues discussed on LtU, such as Gödel's theorem and formal semantics in general. It is also the implicit issue in the paper I linked at the top. An implied conclusion is that "situations" are always essential. Briefly, what is the meaning of (A & (not A)) implies B? Non-termination?
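
In classical logic the answer is the principle of explosion: from A and not-A, any B whatsoever follows, and paraconsistent logics are precisely those that reject this inference. The proof is a one-liner, shown here in Lean for illustration:

```lean
-- Ex falso (principle of explosion): from A ∧ ¬A, any B follows.
-- Paraconsistent logics are exactly those that reject this rule.
theorem explosion {A B : Prop} (h : A ∧ ¬A) : B :=
  absurd h.left h.right
```

As for the "Non-termination?" guess: under the Curry-Howard reading, a language with unrestricted general recursion does give every type a diverging inhabitant, so in such a setting "anything follows" and non-termination are indeed close cousins.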