Programming Languages as Mathematical Representations

Hi Folks,

To those of you with math backgrounds....

A random conversation, about math and machine learning, led me to wondering - LISP & lambda calculus aside - is it reasonable to view programming languages as mathematical representations?

After all - a program represents a set of logical/mathematical relationships - be it descriptive, a model, or a series of operations - not unlike the way a set of differential equations can model a physical system (and be used to predict behavior).

Are there fundamental differences, beyond the symbology and grammars, that differentiate a set of equations from a program?

I ask this partly out of intellectual curiosity, and partly because, when it comes to analyzing and modeling systems, my mind tends to think in terms of code, rather than formulas - and I kind of wonder if there's really something fundamentally different about the thought process, or is it more akin to the difference between, say, differential and integral forms?



Miles Fidelman



That depends on what you consider to be math. Depending on the chosen generality, math could be seen as a system for operating on numbers at the lowest level, a system for describing sets at the middle level, or a foundation for logic at the higher levels. To describe programming, you have to go up to the most general level, that is, higher-order logic, which brings in lambda calculus.

First, you need to know what a programming language is. I think it can be described as a theory about systems found in the universe. Moreover, a system can itself represent another specific theory (an assembler can host the Java programming language, for example).

So, to describe programming languages completely, you need a theory about theories (that is, higher-order logic), not just a theory about numbers. A theory about theories is the glue between pure number formulas.

Does a theory about theories count as math? I don't know for sure, but it can hold math described within it. It would be reasonable to consider math just a specific area of the theory about theories, but then again, math includes set theory and mathematical logic, which pushes math back up the knowledge scale.


Yes, a computer program is maths, and what Turing did was extend mathematics with the concept of state, enabling a new class of problems to be solved.

programming intent

It seems perhaps the distinction here has to do with intent. I've been noticing lately some important distinctions are a matter of intent prior to the sort of formal structure one brings to bear on problems. (Examples: rewriting calculi versus grammars, where the purpose of one is to converge on a single result while the purpose of the other is to survey the geography of possible destinations reachable from a given starting point; or interpreted PLs versus compiled PLs, where one understands programs as data while the other seeks to factor source-code out of the equation.)

I'd suggest there are a couple of distinctions to be made here. One is imperative versus declarative; an imperative program specifies how something is to be done, while a declarative program describes what is to be done. Another is direct specification versus indirect constraint; a program is often meant to specify things directly, whereas a set of equations is commonly a set of constraints that one then needs to figure out how to use in identifying a solution. Imperative programs seem rather inherently direct. A declarative program may be either direct or indirect, I think. The indirect approach may turn out to underconstrain or overconstrain the system, resulting in multiple solutions or no solutions, respectively.
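A toy Python sketch (my own example, not from the thread) of these distinctions: an imperative, step-by-step computation; a declarative, direct definition of the same value; and an indirect constraint that needs a separate "solver", which can be under- or overconstrained.

```python
# Imperative & direct: step-by-step instructions for what to do.
def sum_to_imperative(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Declarative & direct: a closed-form definition naming the value.
def sum_to_declarative(n):
    return n * (n + 1) // 2

# Declarative & indirect: a constraint plus a separate "solver"
# (here just brute-force search over a candidate range).
def solve(constraint, candidates):
    return [x for x in candidates if constraint(x)]

# Constraints can underconstrain or overconstrain:
print(solve(lambda x: x * x == 36, range(-100, 101)))  # two solutions: [-6, 6]
print(solve(lambda x: x * x == -1, range(-100, 101)))  # no solutions: []
```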


Imperative Algorithms

Mathematics is full of imperative algorithms, including Egyptian multiplication, which mathematicians wrote down in the Rhind Papyrus over three and a half thousand years ago, and Euclid's algorithm for the greatest common divisor. These are imperative algorithms, yet they form the very core of mathematics as expressed in Euclid's Elements.
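Both of those ancient algorithms are short enough to sketch imperatively in Python (the function names are mine):

```python
def egyptian_multiply(a, b):
    """Rhind Papyrus multiplication: repeated doubling and adding."""
    total = 0
    while b > 0:
        if b % 2 == 1:   # if this doubling contributes to the product,
            total += a   # add the current doubled value
        a *= 2           # double one operand
        b //= 2          # halve the other
    return total

def euclid_gcd(a, b):
    """Euclid's algorithm for the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a

print(egyptian_multiply(13, 21))  # 273
print(euclid_gcd(48, 36))         # 12
```

Both are loops that repeatedly transform state until a condition holds, which is about as imperative as it gets.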

math functions

Take functions, for example, and let's take a look at the definition of factorial:

f(n) = n * f(n - 1) if n > 0
f(n) = 1 if n == 0

With the addition of "if", mathematics turns into a real little functional language, right? So you can finally program whatever you can in a functional language.
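That two-clause definition transliterates almost directly into, say, Python (a sketch of my own), with the "if" becoming a conditional expression:

```python
# f(n) = n * f(n - 1) if n > 0
# f(n) = 1            if n == 0
def f(n):
    return n * f(n - 1) if n > 0 else 1

print(f(5))  # 120
```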

good example

That really is a fundamental example, isn't it?

high school example, I think

If I remember correctly, it's taught in high school. We also learned a bit of mathematical logic, so "if" can be replaced with the -> operator.

But I think what is really happening is that classical math leaves a lot of common-sense conclusions unsaid, and they float up when you try to formalize them completely with programming languages. I think it is that magical glue (the theory about theories from my post above) that is needed to connect all the booleans, algebra, differential calculus, set operations, and gods know what else that comes with classical math. I'm talking about that bit of common sense that we use naturally and take for granted, without even thinking about it.

human conversation

Seemingly, the flexibility of mathematics in contrast to programming comes from the fact that mathematical proof is a conversation between human beings, whereas programming is usually an attempt to engage in a "conversation" with a computer. Computers don't think, and so the programmer typically descends to the computer's level. I've noted before that abstraction tends to be easy to do in mathematics but difficult in programming; abstraction is really a human conceptual thing, and trying to explain it to a computer is an exercise in frustration. I suspect the difference between human conversation and human-computer interaction is also responsible for the longevity of Lisp: Lisp minimizes the conversation with the computer. There's much emphasis nowadays on complex mathematical support infrastructure for languages — type systems, especially — which reduces flexibility because it's all "conversation" with the computer; but most Lisps, exactly because they lack this rigid mathematical support infrastructure, are potentially more like mathematics.

Problems with maths

I would say most of the problems with maths are caused by the implicitness and informality of human conversation.


I would say most of the problems with maths are caused by the implicitness and informality of human conversation.

I wouldn't so much disagree with the statement itself as object to [well, okay, be wary of :-) ] the apparent emphasis. Try this as a complement: the strengths of mathematics are caused by the flexibility and adaptivity of human thought. Here's another related proposition: the problems with type systems are caused by their explicitness and formality.

For a long time now we've had a strong economic motive to play up how "smart" our technology is. Also, for the past century and more we've been motivated to play up how "smart" evolution is, because we've been trying to get across to people why evolution is powerful enough to explain things traditionally explained by religion. We've had no comparably strong motive to figure out what makes us special, and when we did try to investigate it we've tended to focus on our ability to do particular tasks over and over (easy to test), which I suspect is entirely the wrong place to look for our essential strengths. The result of all this is that we've ended up denigrating ourselves, convincing ourselves that there really isn't anything special about us. Except it's not true. We got here because we're good at things that are prohibitively difficult to do by other means. Now we're on the brink (or, the cusp if you're an optimist) of giving up control of our civilization to things that aren't human; and if we do that while we're busy not noticing what we can do that these other things can't do, the new management won't be up to the task. That could just be the explanation for the Fermi paradox; civilizations get just so far, then forget to value sapience and effectively self-destruct.


throughput vs. correctness

There are 2 kinds of people in the world: those who prefer slower, more probably-really-actually-provenly-correct answers, and those who prefer more of everything, even if it's potentially wrong since it wasn't nailed down as tightly as it could have been.


I doubt that. It seems a bit like saying there are two kinds of people, those who have five fingers on their right hand and those who have six toes on their left foot. Though of statements of this form, I rather like There are two kinds of people in the world, those who divide people into two kinds and those who don't.

tongue in cheek

Any "there are n kinds of people in the world" statements are, I assumed, universally seen as never really serious. Apologies for not putting in a smiley disclaimer originally :-)


Humor is amazingly difficult to convey in text. Even if one knows there's something meant to be humorous, it can be hard to know which aspect of things is meant that way. Open up the bandwidth to its maximum extent by actually being in the same room with someone looking at them while they talk to you, and (if you're a native speaker of the language) you can pick right up on nuances that you'd be hard put to even enumerate afterwards.


More practically, I suggest there are two goals here (amongst others, of course), to prove that things have been expressed correctly, and to express things at all. We don't nearly fully understand the interplay between the two, and if we want to learn about either we may have to temporarily bump up its priority relative to the other.

dogs playing chess

Yup. A lot of things in history were done by hacking them up and leaving them thus for a long time. They work, but are ugly and kinda just poor, even if sorta amazingly surprising at the same time. Then we hope somebody smart can see the right abstraction underlying it all and fix it for us. :-)

thinking previously impossible thoughts

Heh. Possibly not what I had in mind; not sure, actually. I'm concerned with expressing ideas, in part perhaps I'm thinking of the Dijkstra quote about Lisp "assist[ing] a number of our most gifted fellow humans in thinking previously impossible thoughts." In what you describe, it's unclear whether or not the earlier thing involves expressing an idea, but the latter presumably does — evolution, which I see as a classic example of an immensely powerful engine that nonetheless fundamentally lacks that something special that makes us sapient, might do the first part about finding something that works, but would never find an abstraction underlying it.

substrate vs. goal

I think it is entirely correct to view programming languages as mathematical representations, whether or not they are imperative. In either case, programming languages describe abstract operations in an idealized world of abstract entities, which seems sufficiently mathematical to me.

I think the best way to describe the distinction between *doing* programming and *doing* mathematics is primarily one of intent. From this point of view, you might consider programming a form of *applied* mathematics, as distinct from the pure stuff.

As with any mathematical notation, we tend to adapt the form of our notation to suit our intent, so it is not surprising that most programming languages don't look like the conventional notation for mathematics -- particularly since foremost among the requirements of programming is that the notation be executable by computer.

Also, most mathematical notation was originally formulated before computers were available to execute it. So even the most mechanically executable math notation (like formal logic and lambda calculus) was originally intended to communicate ideas between people, rather than to drive a practical execution mechanism that didn't yet exist.


you might consider programming a form of *applied* mathematics, as distinct from the pure stuff.

Yes. I've maintained for years (and am certainly not the first) that theoretical computer science is a relatively pure branch of mathematics; as the flip side of that, it feels right to view programming as applied math.

Andrej Bauer's take on

Andrej Bauer's recent take on mathematics and computation, and the accompanying reddit thread. It coincidentally followed a paper I had come across a few days prior titled Automata Theory Approach to Predicate Intuitionistic Logic, so it seems there are a few people thinking along similar lines of more intimately linking proofs and computation.

You can also flip the question and see math as programming

You can also flip the question and see math as a simplified form of programming which handles state badly.
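One conventional mathematical workaround for state (a sketch of mine, not the poster's claim) is to thread it explicitly: a "stateful" operation becomes a pure function from a state to a (result, new state) pair, which is exactly the clumsiness being pointed at.

```python
def fresh_label(counter):
    """Pure version of a mutable counter: state in, (value, new state) out."""
    return f"L{counter}", counter + 1

# The caller must thread the state through by hand:
state = 0
a, state = fresh_label(state)
b, state = fresh_label(state)
print(a, b, state)  # L0 L1 2
```

Functional languages package this plumbing up (e.g. as a state monad), but the underlying mathematical move is the same explicit threading.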

If math can be programmed on a Turing machine

... can a Turing machine be programmed in math?
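It can: a Turing machine is itself a mathematical object, just a finite transition table plus a step rule on configurations. A minimal sketch (the machine and names are my own), a unary incrementer that appends a 1 to a run of 1s:

```python
# delta maps (state, symbol) -> (new state, symbol to write, head move)
delta = {
    ("scan", 1): ("scan", 1, +1),  # walk right over the 1s
    ("scan", 0): ("halt", 1, 0),   # hit a blank: write a 1 and halt
}

def run(tape, state="scan", head=0):
    cells = dict(enumerate(tape))  # tape as a partial function from position to symbol
    while state != "halt":
        state, write, move = delta[(state, cells.get(head, 0))]
        cells[head] = write
        head += move
    return [cells[i] for i in sorted(cells)]

print(run([1, 1, 1]))  # [1, 1, 1, 1]
```

Nothing here is beyond ordinary set-theoretic definitions: delta is a finite function, and `run` is just the transitive closure of the step relation.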

state of mathematics

I'm not convinced math has to handle state badly. I think the math we've developed handles it badly. What does math that handles state well look like? Good question. Probably not like anything we have atm.