Semantics: Logic vs. PL

The three famous approaches to programming-language semantics are axiomatic, operational and denotational semantics. The two famous approaches to the semantics of logic are proof theory and model theory.

I find denotational semantics similar to model theory, and operational semantics similar to proof theory. However, I find no analogue of axiomatic semantics, and I wonder whether there is one, and if not, why not?

In model theory, one takes an abstract logical language and defines its meaning within an existing theory: classical logic is interpreted in set theory (models are simply sets), intuitionistic logic in Kripke structures (or Heyting algebras), etc. In essence, we define the meaning of the new language in terms of existing, familiar languages. This is none other than denotational semantics, where our programs denote objects in existing, well-defined structures.

In proof theory, we associate no inner meaning with the language. Rather, the meaning emerges out of the symbol pushing that constitutes proofs. Operational semantics operates along the same lines, mechanically manipulating program text, and the meaning emerges out of the textual juggling. Moreover, many modern operational semantics are actually given as a proof system, the theorems being the valid executions of the language.

However, I do not recognise a similar analogy for axiomatic semantics. Am I missing something? Or is it simply because we do not want to explain one logic with another? Or perhaps the power of axiomatic semantics is the augmentation of the computable language with uncomputable/high complexity constructs, something that does not always interest logicians?


Proof theory and PL semantics

It is true that, for highly declarative languages, proof theory can be regarded as close to operational semantics. In particular:

  • Uniform provability does allow proof theory to be regarded as an operational semantics for some logic programming languages; cf. the Uniform Proofs as a Foundation for Logic Programming story. Prolog's operational semantics are messy to deal with proof-theoretically; perhaps surprisingly, it is easier to model Prolog's denotational semantics in proof theory;
  • The formulae-as-types correspondence gives a pleasing relationship to pure functional programming languages, but because of the intricacies of evaluation models, casting the operational semantics in proof-theoretic terms is quite hairy. In fact, the formulae-as-types correspondence can equally be read as providing a basis for denotational or algebraic (read: axiomatic) semantics. In particular, cut-elimination on the proof-theoretic side should provide most of the ingredients of a Galois connection in an order-theoretic denotational semantics, or equally of an adjunction that can be used in an algebraic axiomatic semantics;

Using proof theory as a basis for models of non-declarative PLs is really hard work. There's been a lot of interest in using the proof theory of substructural logics to model imperative features, but the results are not widely applicable yet. There is progress, though.

There is an important similarity between denotational semantics and model theory, because both are based on Tarski's T-schema. But neither the approaches to logical semantics nor those to PL semantics are as orthogonal as you assume.

Operational Proofs

I think you're missing what the comparison is saying. The view of operational semantics as proof-theoretic is not based on the operational semantics of some language corresponding to the proof theory of some logic. If we consider the language *as* the logic, then giving that language an operational semantics just *is* giving it a proof theory, with the judgement e -> e'. This works equally well for functional and imperative languages.
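As a minimal sketch of this reading (the toy language and all names are mine, not from the post), here is the judgement e -> e' for a tiny expression language, written as a Haskell step function; each equation plays the role of an inference rule, and a derivation of e ->* v is exactly a proof:

    -- The judgement  e -> e'  as a step function on a toy language.
    data Expr = Lit Int | Add Expr Expr
      deriving Show

    -- step e = Just e'  corresponds to the judgement  e -> e'.
    step :: Expr -> Maybe Expr
    step (Add (Lit m) (Lit n)) = Just (Lit (m + n))   -- axiom: reduce a redex
    step (Add e1 e2) = case step e1 of
      Just e1' -> Just (Add e1' e2)                   -- rule: step in the left arm
      Nothing  -> fmap (Add e1) (step e2)             -- rule: step in the right arm
    step (Lit _) = Nothing                            -- values do not step

    -- Iterating the judgement to a normal form: running the "proof search".
    eval :: Expr -> Expr
    eval e = maybe e eval (step e)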

Similarly, denotational semantics is model-theoretic in the sense that it gives 'meaning' to the terms in our logic by translating them to a different mathematical structure, which is what model theory is.

Of course, we can give both denotational and operational semantics to the same terms, just as we can give a logic both a model theory and a proof theory.

Brevity is a virtue...

Thanks for that.


Proof theory: an inside or outside view?

If I understood correctly:

  • You referred to cases in which operational and denotational semantics are given in terms of a proof theory for some logic.
  • Some languages' operational semantics is hard to express in proof-theoretic terms (or hard to express at all?).

This indeed opposes the claim that operational semantics is proof theory. I am not sure about the rest of your reply, especially Tarski's T-schema, but it seems to follow the same line.

However, I did not mean to concentrate on the usage of proof theory for presenting operational semantics. Rather, I was aiming at the similarity in spirit (?) of the approaches. Thus, the analogy remains standing. In denotational semantics we translate the language into a target language, while in operational semantics the crunching of the meta-language explains the meaning.

With logic programming, the syntax draws heavily from logical formulae, and even the informal semantics draws heavily from proof theory. Hence it is not utterly surprising, yet still very impressive, that the proof theory is the operational semantics.

Admittedly, I do not have experience with denotational semantics for logic programming. But it seems that even in this case, the model theory is used as the mathematical structure we translate into (target language), and not the mathematical language we use to give the semantics (meta language).

Reversing the situation, in pure functional languages, e.g. Haskell, the language itself draws heavily on concepts from denotational semantics. Thus you have, within the language, concepts taken directly from the denotational vocabulary: functor, monad, etc. In this light, it also seems reasonable that the operational semantics would be hairy, since it has to simulate the (complex?) model within the syntactic vocabulary.

But I was talking about a more abstract (vague?) similarity. With denotational semantics we have a source language, target language and meta language. With operational semantics, we make do without the target language. The semantics are the churning of the meta-language, which simulate execution.

With this in mind, the distinction between model theory and proof theory is the same, with the former having a specified target language (a class of models) and the latter not.

But axiomatic semantics does not seem to fit this schema. Is the schema flawed? Is there something inherent in the axiomatic approach that does not fit the schema? Or does the analogy continue to a third approach to logical semantics?

A Brief History of Time

However, I find no analogue of axiomatic semantics, and I wonder whether there is one, and if not, why not?

I'll give you my take on the distinction between the three kinds of semantics that might illuminate the matter.

I tend to see the difference between the three as being about how time is presented.

  • Denotational semantics (and model theory and most mathematics) is essentially timeless and stateless. You have some structure that is just there serving as the meaning (even if in practice you only discover it a bit at a time).
  • Operational semantics and proof theory have an implicit notion of time that tends to be monotonically stateful: information gets added step by step to the system as it evolves over time to its "completion".
  • Axiomatic semantics goes a step further, to a fully stateful and less global notion of time, focusing instead on one step. Essentially it says "I don't know what happened in the distant past of the system and I don't know what will happen in the distant future, but I guarantee what will happen just before, during and just after the current step in time". The special mention of invariants through time is the exception that proves the rule; in a denotational view the whole system is invariant through time. (A small sketch after this list makes the before/after reading concrete.)
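To make the third point concrete, here is a minimal sketch of a Hoare triple, the basic judgement of axiomatic semantics; the one-variable state and all the names are illustrative choices of mine, not from the post:

    -- A Hoare triple {P} c {Q}: if P holds just before c runs, then Q
    -- holds just after. Nothing is claimed about the distant past or future.
    type State = Int                      -- a single variable x, for simplicity
    type Pred  = State -> Bool

    incr :: State -> State                -- the command  x := x + 1
    incr x = x + 1

    -- Does {pre} c {post} hold, started from state s?
    holds :: Pred -> (State -> State) -> Pred -> State -> Bool
    holds pre c post s = not (pre s) || post (c s)

    -- Example: {x = 41} x := x + 1 {x = 42}
    example :: Bool
    example = holds (== 41) incr (== 42) 41   -- True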

In this framework, you can see that it is pretty hard to imagine what the equivalent in logic for axiomatic semantics would be, unless you were dealing with an explicitly temporal logic with well-defined "steps". Logics tend to have a "global" outlook.

Well...

it is pretty hard to imagine what the equivalent in logic for axiomatic semantics would be, unless you were dealing with an explicitly temporal logic with well-defined "steps". Logics tend to have a "global" outlook.

Either Propositional Dynamic Logic in general, or rewriting logic in particular (which, together with Maude, has been mentioned quite a few times here and is long overdue for its own story) would seem to fit the bill. Maude really is useful in reasoning about computations.

A difference of scope

Either Propositional Dynamic Logic in general, or rewriting logic in particular

I agree those are examples of specific logical systems with temporal semantics equivalent to the axiomatic semantics approach.

At this point in time, though, they don't have the generality across all logics that "proof theory" or "model theory" currently have, which is what I assumed the OP was looking for.

I think we can agree, however, that it would be a great thing for the harmony of math, logic and computation if explicitly temporal understandings of logic did become as foundational as the other two.

We can dream... ;-)

Muddy waters

I got too low-level in the post above, hiding the wood behind the trees. To sloganise:

  • Structural proof theory is (imperfectly) algebraic, and algebraic is axiomatic, so proof theory's closest PL analog is axiomatic semantics, not operational semantics (see the point about Galois connections & adjunctions, above). Don't let the existence of proof rewriting mislead you;
  • There is no clean distinction between the three kinds of PL semantics: we know how to use any one of them to manufacture the others. Granted, they are not equivalent, but in many cases the differences are window dressing.
  • The threefold distinction is outmoded. Besides corollaries of the points I made before, how about new approaches to PL semantics, like game semantics and Pitts' algebraic operational semantics?
  • There is no clean distinction between the two kinds of logical semantics: Prawitz's thesis says that structural proof theory is a theory of denotations: the denotation of a proposition/type is the collection of the matching closed proofs/terms. Likewise, the T-schema for a model theory describes an inference system that may be analysed using proof-theoretic tools.

You might like to take a look at Propositional Consequence Relations and Algebraic Logic if you want to look for another candidate matching axiomatic semantics.

I should emphasise that I'm not meaning to attack the intuitions tying the logical semantics to PL semantics. I'm attacking the idea that these labels describe clean, well-distinguished classes of formalisms. Even on a charitable reading, the situation is too muddy for that.

Is the question out of place then?

So your basic claim is that the entire question is out of place. I am not convinced though.

Regarding the first bullet, I am not convinced by the argument:

proof theory is axiomatic => proof theory semantics corresponds to axiomatic semantics.

which bears too much resemblance to:

An elephant is an animal => A small elephant is a small animal.

However, unlike the latter, I presume the claim for similarity of proof theory semantics and axiomatic semantics has some substance behind it, so could you explain it a bit further please?

In addition, I understand your claim that there are myriad ways to present semantics. Nevertheless, the categorisation into operational and denotational semantics is well established, not outmoded, at least to my knowledge. However, I'd be happy to be proven wrong about this.

Moreover, newer approaches extend the analogy further: for example, game semantics exists for both PL and logic, and the two approaches are indeed similar to each other. (Since I am not familiar with Pitts' algebraic OS, I have no idea whether it has a counterpart in this analogy.) This extension makes the original question even stronger: why, then, isn't there a counterpart for axiomatic semantics?

Finally, you mentioned that there is no clean distinction between the semantics. However, you also mentioned that the blur occurs in both PL and logic, so even the blurred distinction is parallel on the two sides. This, too, does not make the question irrelevant.

Algebraic nature of proof theory

The algebraic nature of proof theory: check out Lambek's encyclopedia article I posted below.

I certainly think there are deep analogies between semantics of PL and in logic. However, I think trying to match up the three on the one side with the two on the other just muddies the waters. The right place to look is the design ideas that lie behind the different sorts of semantics. So, for example, there is the idea of a T-schema, used in both Tarskian model theory and the Scott-Strachey approach to denotational semantics. This idea isn't used, for instance, in the proof-theoretic semantics of the Curry-Howard correspondence.

The algebraic nature of


    The algebraic nature of proof theory: check out Lambek's encyclopedia article I posted below.

To my understanding, Lambek's entry summarises the various connections of category theory to logic, which certainly qualifies as "proof theory is algebraic". But I did not disagree with that claim. I only did not understand the implication:
"proof theory is algebraic/axiomatic => proof theory semantics is similar to axiomatic semantics".

It is this implication I wish to understand, and I still don't understand it after reading Lambek's entry (with your quotes).


    I certainly think there are deep analogies between semantics of PL and in logic.

I know you do :-).
But I don't see this as muddying the water. This discussion hasn't convinced me that the following analogies are invalid:


    Logic                     PL
    proof theory semantics    operational semantics
    model theory semantics    denotational semantics
    game semantics            game semantics
    ????                      axiomatic semantics

Marc mentioned that the relevant difference was the relation to time. Since time is explicit in axiomatic semantics and irrelevant in general for logical formulae, there is no corresponding approach to logic semantics.

Neel observed that axiomatic semantics can be viewed as predicate transformers, hence as just a special case of denotational semantics; thus the correlation continues, with axiomatic and denotational semantics occupying the same slot.

But I haven't understood your resolution. It seems as if you want to render the question irrelevant, but I don't yet see how.

Finally, could you explain what you mean by the T-schema and the Scott-Strachey approach? I don't see how to relate the former to the latter, at least from the references you provided.

Two T's

But I haven't understood your resolution. It seems as if you want to render the question irrelevant, but I don't yet see how.

I deny that there is a determinate meaning to the three sorts of PL semantics. Their main point is to gesture at a body of classic work. But in modern practice they are not well separated, because the bridges between the different kinds of semantical representation are so many and so well trodden.

Finally, could you explain what you mean by the T-schema and the Scott-Strachey approach? I don't see how to relate the former to the latter, at least from the references you provided.

In tense logic one might have T being a two-place relation acting on expressions and times:
    T("Ann used to cycle to work",t) iff (exists past time interval i)(forall t' in i) T("Ann cycles to work",t')

In a Scott-Stracheyesque semantics, one might have T being a three-place relation acting on programs, stores and values:
    T("x:=2; P", s, v) iff T(P, update(s,x,2), v)

The similarity is no surprise: Scott was formerly a favoured student of Tarski's and is still a great admirer of him.

Second try

I presume the claim for similarity of proof theory semantics and axiomatic semantics has some substance behind it, so could you explain it a bit further please?

Ah, indeed my first answer was not very good, so I'll try again. There's a lot of interest in the idea that proof search in substructural (read: resource-sensitive) logics can provide a semantics for, on the one hand, process algebras like CSP or CCS, and on the other, temporal logics such as propositional dynamic logic. The algebraic aspect is that the logical connectives on one side are very similar to the algebraic concurrency operators on the other.

There's also work trying to formalise the stateful constructs within sequential contexts using variants of linear logic, such as in LLF, the linear logical framework.

This is the kind of concrete work I had in mind, although one can reasonably argue that this is finding a non-axiomatic semantics for, say, CSP, rather than showing a parallelism.

That's really the problem: the whole nomenclature is so confused that there are not determinate criteria, only subjective impressions, to sort out what falls under the umbrella of which semantics.

Axiomatic semantics

This question is one that took me a long time to figure out a satisfying answer to -- largely because I didn't have the background to appreciate the answer (which people had figured out decades ago!). I'll restrict my discussion to imperative programs, because the connections are much easier to see when you don't also have to keep track of all of the ramifications of higher-order, too.

The critical idea behind axiomatic semantics is the predicate transformer. The idea is that a program is an action, which changes the world. We describe the world in terms of predicates, and so a program can be understood as a predicate transformer -- the meaning of a program tells you how to update your description of the world when you run your program. That is, it's a function from predicates to predicates. So

[[e]]_p : Predicates -> Predicates

This is connected to denotational semantics via the CPS-transform. The slogan is "predicate transformer semantics is a continuation semantics where the answer type is the type of truth-values". In a standard denotational semantics, we describe the semantics of an action as a transformer on states. Therefore, commands have the type:

[[e]]_d : State -> State

Now, note that a predicate tells us whether it holds of a given world -- a state -- or not. Therefore, a predicate is a function from states to truth values: that is, Predicate == State -> O, where O is the type of truth values. So this means that the type of the predicate transformer can be viewed as:

[[e]]_p : (State -> O) -> (State -> O)

When you work out the details, you can show that a predicate transformer semantics and a direct denotational semantics coincide when the predicate transformer semantics is a CPS-conversion of the direct denotational semantics.
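As a minimal sketch of that coincidence (the one-command language and all the names are mine, not from the post), with O = Bool:

    -- States, predicates, and the denotation of the command  x := 2 .
    type State = [(String, Int)]
    type Pred  = State -> Bool            -- Predicate == State -> O

    denote :: State -> State              -- [[x := 2]]_d : State -> State
    denote s = ("x", 2) : filter ((/= "x") . fst) s

    -- The CPS image at answer type Bool is a predicate transformer: the
    -- weakest precondition of a postcondition q is simply  q . d .
    wp :: (State -> State) -> (Pred -> Pred)
    wp d q = q . d                        -- [[e]]_p : (State -> O) -> (State -> O)

    -- By construction,  wp denote q  holds in s  exactly when  q  holds in
    -- the state produced by running the command, so the predicate
    -- transformer semantics is the CPS conversion of the direct one.

(For nondeterministic or possibly nonterminating commands the story needs more care, but this is the shape of the correspondence.)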

The connection to the operational view of programs is more abstract, though extremely philosophically elegant. Now, predicates are descriptions of program states. We can now ask, what facts about programs are actually physically observable? Well, if we take "observe" to mean what we can learn by running a program, then the simplest property we can think of is whether it halts or not. For any program that halts, we can tell that it halts if we are willing to wait long enough. However, we cannot establish that a program loops just by running it, because we can never be sure that we don't merely need to run it a little longer. So termination is a semi-decidable property: we can prove termination by running a program, but we can't prove non-termination. This asymmetry means that we can't establish arbitrary properties of programs by observing them. We can establish finite conjunctions and disjunctions of testable properties (perhaps by running the same program on multiple computers), as well as existential claims (there is an integer input for which the program halts). But we can't test universal claims (that the program halts for all integer inputs).

It turns out that these kinds of operations correspond exactly to the allowed operations in topology. So the slogan here is "data types are topological spaces". In fact, I'm telling the story backwards: it was this very topological structure (of the logic of observable properties) that led Dana Scott to invent domain theory and denotational semantics. M.B. Smyth (I think) subsequently returned to this idea in order to relate predicate transformers (i.e., axiomatic semantics) back to denotational methods.
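A minimal sketch of the observability point (the encoding is my own illustration, not from the post): model an observable, semi-decidable property as something confirmable within a finite budget of observation steps, and check which logical operations preserve observability.

    -- An observable ("semi-decidable") property of a: given a step budget
    -- n, have we confirmed the property within n steps? Confirmation is
    -- assumed monotone in n: once observed, always observed.
    type Observable a = a -> Int -> Bool

    -- Finite conjunction: run both experiments.
    andO :: Observable a -> Observable a -> Observable a
    andO p q x n = p x n && q x n

    -- Finite disjunction: run both, accept if either confirms.
    orO :: Observable a -> Observable a -> Observable a
    orO p q x n = p x n || q x n

    -- Existential over the naturals: dovetail witnesses against budgets.
    existsO :: (Int -> Observable a) -> Observable a
    existsO f x n = or [ f k x n | k <- [0 .. n] ]

    -- No such combinator exists for a universal claim: confirming
    -- "for all k" would require infinitely many observations. Open sets
    -- behave the same way: finite intersections, arbitrary unions.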

I think higher-order doesn't make anything I said false, but it does make it a lot more complex -- the connections to proof theory become a lot more visible at this point, because the proof-theoretic semantics of logic can be expressed using realizability interpretations, and these are closely linked to the proof technique of logical relations.

Excellent post

In 1994 I attended several seminars in a series organised, I think, jointly by Tony Hoare and Lincoln Wallen, which was about "Unified semantics", meaning approaches to semantics that married the axiomatic, denotational and operational approaches. I found the seminars very hard going, since they seemed to be mostly occupied by slightly incompatible impressions as to what these semantics were.

Had we had a few print-outs of neelk's post back then to distribute to all of the participants, I think the seminar series could have been more productive.

One perennial point of confusion is not cleared up by neelk's post: what is operational semantics? It is clear that Strachey used the term in a different way than Neel uses it above; I think Neel's usage is derived from, but not quite the same as, Plotkin's, which has an SOS consisting of both a 'big step' and a 'little step' reduction relation: without the latter, I don't think one can be sure that the semantics corresponds to something a machine can do. I think I'd call this a quasi-operational semantics.

My perennial favourite preprint (Streicher & Reus) shows one can read off an abstract machine from the T-schema for Scott's D_inf model, thus giving a real operational semantics. This paper should be much more influential than it is. It contains the germ of a case as to why full abstraction is not an important property for denotational semantics, if we have Stracheyesque reasons for doing semantics.

Operational semantics

Could you say something about the difference between Plotkin and Strachey's view of operational semantics? I don't actually know how they differ (or even what they are!), and I'm curious about it.

In fact, I told the story the way I did precisely because I'm not honestly sure what justifies calling something an operational semantics. The observational/topological point of view is agnostic on this point: all it asks is that you be able to execute a program --- it doesn't say anything about how it executes.

It seems to me that there is a continuum of views. At one end, an operational semantics should be a transition semantics in which each transition takes at most a constant amount of work, and a constant, local amount of communication. Basically, this keeps us close to the intuitions about what primitive operations a machine can perform. But this is very strict; even the standard small-step semantics of the lambda-calculus doesn't qualify because it uses substitution, which can take an arbitrary amount of work.

At the other extreme, we could allow any computable function to serve as an operational semantics. Here, we're thinking in terms of the Church-Turing thesis, and so we are only asking that some machine be able to perform our computation. This has the benefit that we can take any constructive description of a language and view it as defining a semantics.

For functional languages, the former point of view seems to lead to things like proof nets and explicit substitution calculi. The latter point of view seems to lead to getting normalization functions out of things like cut-elimination theorems. Connecting these views is of course a whole industry, but I don't know of any general story of when and why it's possible.

For non-functional languages, like pi-calculus or imperative languages, I understand even less about how things fit together.

Plotkin's Operational Semantics

Assuming his views on Structural Operational Semantics (SOS) haven't changed, his A Structural Approach to Operational Semantics explains them (bottom of page 3, my emphasis):


    The announced “symbol-pushing” nature of our method suggests what is the truth; it is an operational method of specifying semantics based on syntactic transformations of programs and simple operations on discrete data. The idea is that in general one should be interested in computer systems whether hardware or software and for semantics one thinks of systems whose configurations are a mixture of syntactical objects --- the programs and data --- such as stores or environments. Thus in these notes we have:

      SYSTEM = PROGRAM + DATA


    One wonders if this study could be generalised to other kinds of systems, especially hardware ones.
    Clearly systems have some behaviour and it is that which we wish to describe. In an operational semantics one focuses on the operations the system can perform – whether internally or interactively with some supersystem or the outside world. For in our discrete (digital) computer systems behaviour consists of elementary steps which are occurrences of operations. Such elementary steps are called here, (and also in many other situations in Computer Science) transitions (= moves). Thus a transition steps from one configuration to another and as a first idea we take it to be a binary relation between configurations.

Thus, the continuum of operational semantics that Neel has mentioned falls entirely under "operational semantics" (a la Plotkin), differing only in the granularity of transitions. Plotkin also mentions this (in a note on page 33):


    In formulating the rule for assignment we have considered the entire evaluation of the right-hand-side as part of one execution step. This corresponds to a change in view of the size of our step when considering commands, but we could just as well have chosen otherwise.

Thus, choosing the step size depends on the application, but doesn't disqualify the semantics from being operational.
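A minimal sketch of the step-size point (the toy language and encoding are mine, not Plotkin's): the same assignment x := e, first with the whole right-hand side evaluated in a single transition, then one primitive operation per transition.

    -- One store, one assignment command, two choices of step size.
    type Store = [(String, Int)]
    data Expr  = Lit Int | Var String | Add Expr Expr

    evalE :: Store -> Expr -> Int                 -- ordinary evaluation
    evalE _ (Lit n)   = n
    evalE s (Var x)   = maybe 0 id (lookup x s)   -- unbound defaults to 0
    evalE s (Add a b) = evalE s a + evalE s b

    -- Coarse:  <x := e, s>  ->  s[x := n]  in one transition.
    assignBig :: String -> Expr -> Store -> Store
    assignBig x e s = (x, evalE s e) : filter ((/= x) . fst) s

    -- Fine: first reduce the right-hand side one operation at a time,
    -- <x := e, s> -> <x := e', s>, then perform the assignment.
    stepE :: Store -> Expr -> Maybe Expr
    stepE s (Var x)               = Just (Lit (maybe 0 id (lookup x s)))
    stepE _ (Add (Lit m) (Lit n)) = Just (Lit (m + n))
    stepE s (Add a b)             = case stepE s a of
      Just a' -> Just (Add a' b)
      Nothing -> fmap (Add a) (stepE s b)
    stepE _ (Lit _)               = Nothing

    assignSmall :: String -> Expr -> Store -> Store
    assignSmall x (Lit n) s = (x, n) : filter ((/= x) . fst) s
    assignSmall x e       s = case stepE s e of
      Just e' -> assignSmall x e' s               -- iterate the small steps
      Nothing -> s

Both are transition systems producing the same final store; they differ only in what counts as a single transition.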

I have no idea about Strachey's approach to OS though.

Thanks & big steps

Thank you for posting this. I guess I am obliged to post something about Strachey's view. One thing:

Thus, choosing the step size depends on the application, but doesn't disqualify the semantics from being operational.
I think it does: with only big-step semantics, one is not giving anything that obviously constitutes a guide to building a machine to perform the computation; small-step semantics is much closer.

It is true that one can regard the big-step semantics as the kind of goal-oriented computation that could be performed by a Prolog-like language, but I don't think one can normally say that this is the machine that is intended by the semantics.

Step Size


    with only big-step semantics, one is not giving anything that obviously constitutes a guide to building a machine to perform the computation; small-step semantics is much closer.

If by "building" you mean "building a physical machine", then yes, small-step semantics are closer in nature. However, remember that we are talking about Plotkin's approach to OS, which is not "OS consists of building a physical machine to execute the language", but rather "OS consists of a transition system whose behaviour gives meaning to the language" (my wording).

Besides, in some big-step semantics the behaviour you are looking for can be recovered from each step. The proof of a transition becomes the explanation of how that step is executed by the system internally (see the diagrams on page 34 of the above document). Sometimes this proof can be deterministically recovered from the transition, which means that we still have a method to build a physical machine while retaining big-step semantics.

Some transition systems are more equal than others

The big-step relation is normally semidecidable, whilst the small-step relation is decidable, typically with very low complexity. There is nothing that can obviously be described as operations or transitions in the big-step semantics except the inference steps themselves.

I did allude to mechanically recovering small-step semantics from big-step semantics in my grandparent post: yes, everything you say is so, and it is possible to give big-step semantics in a manner so constrained that one ensures the existence of the small-step semantics. Still, there is a difference between hoping that one's semantics is operational and knowing it.

Hiding Parts of the How

Yes, in big-step semantics the "how" is hidden in the single transition the system makes for any program. The reason it is hidden is that it is irrelevant for that application. Had it mattered, a smaller-step semantics would have been used.

So you describe the operations at the granularity that matters to you. If you want to extract a machine, use finer steps. If you want a more abstract view, use coarser steps. Either way, you give meaning to the language by the operation of some transition system, which is what makes it OS according to Plotkin.


Still, there is a difference between hoping that one's semantics is operational and knowing it.

Again, you take for granted that "operational semantics" means "being able to extract a physical machine", which is not the case with Plotkin. Is this the difference you were referring to?

Difference

Is this the difference you were referring to?
Yes

Computational Epistemology

We can now ask, what facts about programs are actually physically observable?

This is the crux of my analysis that the denotational/operational distinction is really a temporal one.

In a pure mathematical world, it is possible to observe the outcome of an infinite process. In a computational world, it is not.

You can model this distinction of knowledge in a purely mathematical way with certain simplifications, and sometimes it is very fruitful to do so.

But at base computation is a time-limited epistemology of mathematics.

I think one of the reasons that the definition of operational semantics is so fuzzy in some accounts (as discussed by Charles elsewhere in this thread) is exactly that if you try to "pure-mathematize" the distinction between denotational and operational, it ultimately vanishes.

Categorical semantics of logic

Joachim Lambek (2001). Categorical logic. Entry in Encyclopaedia of Mathematics. Springer-Verlag.

Quotes:

  • In fact, it may be claimed that, at a very basic level, logic and category theory are the same.
  • Categorical proof theory: One is thus led to the traditional typed lambda-calculus of H.B. Curry and A. Church, although these authors dealt with the proof theory of implication alone, having followed D. Hilbert in applying Occam's razor to eliminate conjunction from the propositional calculus. However, viewing implication as an adjoint to conjunction, some people now prefer to study the lambda-calculus with surjective pairing, which turns out to be equivalent to a Cartesian closed category.
  • Multi-categories: Gentzen had imposed three structural rules: interchange, contraction and weakening; but the cut-elimination theorem holds even without them, giving rise to what has been called substructural logic. A multi-category satisfying the structural rules is nothing else than a many-sorted algebraic theory. A multi-category not satisfying them is a context-free recognition grammar. Thus, multi-categories may be applied in linguistics. More surprisingly, they have recently found applications in theoretical physics.
  • Categorical model theory: Completeness theorems in logic assert that certain theories have enough models (cf. also Gödel completeness theorem). For classical first-order logic such a result is due to K. Gödel and A.I. Mal'tsev [...] For classical higher-order logic, completeness was established by L.A. Henkin [...] Joyal's theorem is equivalent to that of Gödel and Mal'tsev. [...] The intuitionistic generalization of Henkin's completeness theorem can be expressed algebraically thus [a6]: Every small topos can be fully embedded by a logical functor into a product of local toposes.

Historical/Conceptual perspective

The following quote might be relevant for framing the current discussion:

Given any object, relatively abstracted from its surroundings for study, the behavioristic approach consists in the examination of the output of the object and of the relations of this output to the input. By output is meant any change produced in the surroundings by the object. By input, conversely, is meant any event external to the object that modifies this object in any manner.

The above statement of what is meant by the behavioristic method of study omits the specific structure and the intrinsic organization of the object. This omission is fundamental because on it is based the distinction between the behavioristic and the alternative functional method of study. In a functional analysis, as opposed to a behavioristic approach, the main goal is the intrinsic organization of the entity studied, its structure and its properties; the relations between the object and the surroundings are relatively incidental.

-- Rosenblueth, Wiener and Bigelow (1943). Behavior, Purpose and Teleology.

Calculus in Coinductive Form

Based on the above discussion, there are accepted ways to represent the semantics of "steps". If we have a step semantics and we need to introduce time, then what we need is a computational semantics of calculus. There is at least one suggestion for how to do this: Calculus in Coinductive Form. Using this method we could conceivably define the Laplace or Z transform of a computer program. This would formally integrate Linear Systems Theory and software semantics.
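As a minimal sketch of the coinductive idea (I am following McIlroy's power-series-as-streams presentation here, which is in the spirit of that paper rather than its exact development), a signal can be represented by the infinite stream of its Taylor coefficients at 0, with differentiation and integration as stream operations:

    -- A signal as the stream of its Taylor coefficients at 0.
    type Series = [Rational]               -- a0, a1, a2, ...

    -- Differentiation: drop a0 and rescale the remaining coefficients.
    deriv :: Series -> Series
    deriv (_:as) = zipWith (*) [1 ..] as
    deriv []     = []

    -- Integration from 0, with initial condition c: the inverse shift.
    integ :: Rational -> Series -> Series
    integ c as = c : zipWith (/) as [1 ..]

    -- The exponential, defined corecursively by  e' = e, e(0) = 1 ;
    -- laziness makes the definition productive, one coefficient at a time.
    expS :: Series
    expS = integ 1 expS
    -- take 5 expS  ==  [1 % 1, 1 % 1, 1 % 2, 1 % 6, 1 % 24]

Since such a coefficient stream is, up to repackaging, the data from which a Laplace or Z transform is built, this suggests one route to the integration with Linear Systems Theory envisioned above.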