Why say Actor Model instead of message passing?

My system uses message passing in its highest-level abstractions, and this does provide concurrency; however, the Actor Model is only a specific instance of message passing. The Actor Model has the following characteristics:

(these quotes are from Wikipedia)

  1. send a finite number of messages to other actors
  2. create a finite number of new actors
  3. designate the behavior to be used for the next message it receives

The use of the word "finite" is redundant in a CS context, as everything is finite except for a bug called an infinite loop. Although my message passing can create new "actors" and send multiple synchronous or asynchronous messages, it is only through changing the persistent state of my actors that one message can affect another (point 3). In general, my system tries to keep interaction between messages to the same "actor" to a minimum.

The Actor Model specifically says that messages can arrive in arbitrary order, while my system will always deliver messages to the same actor in the order they were sent.

The Actor Model didn't guarantee that the recipient of a message actually exists or that the message was received. My system ensures that a message can be delivered before it is sent, or an error message is the result.

The Actor Model didn't queue multiple messages addressed to a single actor. Without this guarantee, how would you know whether messages that normally work would always work, regardless of the workload? With the Actor Model, you would have to create your own protocol to make it usable. My system queues all messages sent to all actors.

In the Actor Model, the "return address" for a message was left up to the sender. The Actor Model only incorporated messages in one direction, rather than having automatic return messages and the choice of synchronous or asynchronous messages. My system incorporates 3 different kinds of messages right in the syntax of the language, so it is obvious what kind of message is being sent.

The Actor Model, as defined by Mr Hewitt in 1973, was "inspired by physics including general relativity and quantum mechanics". I created all the rules of my message passing without any knowledge of the Actor Model, or indeed of what kind of message passing other languages use. My inspiration was just common sense and programming experience. I am not taking credit for inventing message passing, but I am saying that message passing is a normal and obvious Computer Science concept rather than a Mathematical one, regardless of who first published the idea (sort of).

My recommendation is that "message passing" be used rather than "Actor Model" when talking about message passing and concurrency, UNLESS all the features of the Actor Model are used.


Lambda calculus doesn't have Unicode support

To whom are your title question and closing recommendation directed? I don't think message passing is commonly understood to refer to the actor model.

Also, the Actor Model is a mathematical model of computation, not a programming language. Many of the comparisons you've made to your language don't make sense for that reason.

Doesn't make sense?

I was looking through many posts and I found one that complained about the lack of concurrency in many modern programming languages. The post suggested using the Actor Model for implementing concurrency, hence this post and the fact that my concurrency is based on message passing.

If computers had never existed, would the Actor Model have been invented? If you say "no" then it is not Math but Computer Science (CS) described in Math syntax. If you say "yes", then how could it have ever been used?

I am not a Mathematician, I work on computer systems. Your response is exactly why Math shouldn't be confused with CS.

CS has algorithms and concepts that make all the special problems we have using computers work. Pure Math belongs in a world of its own, where things can be infinite or otherworldly. I strongly dislike FP for many of these same reasons. Why try to fit a square peg into a round hole when CS already has excellent tools without the Math?

In CS, we have an unambiguous "God" that decides if things are correct or not and that is our CPUs. "Is it useful?" and "does the computer execute it correctly?" are what matters rather than some Mathematical proof.

I would just add that if a Math concept is useful in CS, then I would be the first to appropriate it. Much of Physics uses Math notation but does that mean that Physics is just a sub-discipline of Math? Math is used in almost all Sciences but that doesn't mean that Math has a patent on explicit or correct algorithms.

CS has a perfectly unambiguous syntax for describing algorithms and it is called source code. We even have an independent third party that judges if that code is correct or not and that is a compiler and our CPUs.

Fancy book learn'n

The purpose of the Actor Model and other mathematical models is that they abstract away details of real computing systems. The lack of certain features is a feature.

While I can sympathize some with FP languages overemphasizing math notations, I don't agree with much else of your rant. I'd guess that language designers who try to distill and deeply understand the core concepts of their language (i.e. understand the math) do a better job getting the corner cases right. It's easy to dismiss the stuff you don't get as overly abstract lunacy and the work of wonks, but there are usually good reasons for it (with the exception of category theory, which is overly abstract lunacy and the work of wonks).

Just continuing the "rant".

"rant" is a good argument.

You imply that deep understanding of "the core concepts of their language" comes from Math, but I would say that deep understanding of any large piece of code requires programming experience in that domain rather than Math.

Please don't assume that I "dismiss the stuff I don't get as overly abstract" when you have no clue what I understand or don't.

I contend that most computer code is created for business clients, and I believe that people who never develop such code themselves are hardly qualified to design the languages those programmers use. I could go further and say that people who don't have extensive real-world programming experience aren't qualified to teach programming students either.

Have I made enough enemies yet?

Mathematicians make poor programmers.

I had a world-class Math professor as a software business partner for 8 years, and I have personally met many Mathematicians. The sample is too small to generalize from, but my experience says that real Computer Scientists consider the practical reality of the computers we run our code on, and the Math types much less so. Obviously there are many exceptions, and I use Math notation whenever I feel it is useful as well. Math was my minor at University, so I am not without some knowledge of that domain.

I am, however, a very proud Computer Scientist who has had a long career making tools and business solutions for many companies, and I am not impressed with Math elitists thinking they are the only ones who can create new and good solutions in PL design or CS.

Earlier today, I checked out an article linked from this site that talked about folds. The CS concept is trivial, but the Math was totally nonsensical. If that is what you aspire to, good luck to you.

An example was having an array of the numbers 1-5 and then "folding" that array with the plus function to get 15. I programmed summing a vector with a simple loop in the first week I started programming in 1975, but this example had each value in the "tree" replaced with a function, etc.
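For reference, the fold example as described (array 1-5, the plus function, result 15) is a one-liner in Python:

```python
from functools import reduce

# Fold the list [1..5] with the plus function, starting from 0.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5], 0)
print(total)  # 15
```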

I programmed a few thousand lines of code in APL in January of 1976 and summing a vector was just as trivial then as it is now.

Do you really think that recursion is a step up from a simple loop (thinking of FP)?

Look, I don't care

Look, I don't care if you want to build "Duck Tape: a programming language for ordinary folks" and peddle it at Walmart to Joe Sixpack who's looking to build widgets with more knobs and dials... but now I'm just trolling you. What do you expect when you show up at a forum named "lambda the ultimate" and complain that the problem with programming languages is too much math?

Anyway, you mentioned elsewhere that you've recently been through the Haskell documentation, and while I don't know which tracts you went through, it wouldn't surprise me if that's where you encountered the math elitism that you came here to do battle with. A good friend of mine had a similarly hostile reaction to Haskell.

As for the particular example of whether recursion is actually superior to looping, I think it often is. I probably won't debate this too much, but there is a very natural style of programming where the only thing you do is write down values that you know how to express. Recursion arises naturally when you have indexed values and the values at one index depend on values at other indices.

I deserved that.

You are right on all counts.

My first recursive function was my B-tree index back in 1983.

Please re-read my other post as I was very serious about a little help.

Thanks Math guy :)

I'd like to point out that

I'd like to point out that even in imperative languages, my experience is that mapping/folding makes life easier than looping, due to variable scope.

I don't think it's controversial to say that the majority of variables in loops are different on each iteration, ie. intermediate values. There are perhaps a couple of values which should persist across iterations, eg. accumulators.

The problem with loops is that all variables persist across all iterations by default, and must be explicitly overwritten with new values (or explicitly unbound, if the language permits it). This means loop variables default to the minority use-case of accumulators, forcing every other variable to be explicitly updated. It's easy to make a mistake doing this, eg. by updating variables in the wrong order. In these cases, the previous values get used silently; no type errors, warnings, etc.

On the other hand, when recursing (explicitly, or via a combinator like map or fold) all variables are local to the function. Persisting a variable to the next iteration must be done explicitly, usually by passing it through as an argument to the recursive call or, in languages with closures, by inheriting it from the parent scope. In this setting, 'updating a loop variable' becomes 'declaring a local variable', so getting the order wrong will cause a 'no such variable' error.
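A small illustration of the difference, sketched in Python (variable names are hypothetical): in the loop, reading a variable before it has been updated silently reuses the previous iteration's value, while inside a function the same ordering mistake fails loudly.

```python
# Loop version: 'r' persists across iterations, so reading it too
# early is silent -- it just yields the previous iteration's value.
r = 0
out = []
for b in [1, 2, 3]:
    p = r + 1      # bug: reads last iteration's r (0 on the first pass)
    r = b * 2
    out.append(p)
print(out)         # [1, 3, 5]

# Function version (as map/reduce would call it): the same mistake
# raises an error, because each call's locals start out unbound.
def step(b):
    p = r + 1      # UnboundLocalError: local 'r' referenced before assignment
    r = b * 2
    return p

try:
    step(1)
except UnboundLocalError:
    print("ordering bug caught as an error")
```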

The other advantage is that stepping through a loop with a debugger can be very confusing, as the in-scope variables will often be inconsistent as some will still have their previous values. With recursion, everything is always consistent (if not, you've found your bug ;) ).

Even if you doubt the merits of pure functional programming, I think the advantages of recursion over loops are clear. It's now very rare that I write a loop explicitly, even though most of my programming is in 'non-Maths' languages like PHP, Javascript and Python.

I wrote a blog post about this, when someone at work asked me what 'the best way to do a for loop is' in Javascript.

What do you call a loop?

I just quickly looked at your example BUT I think a "for each" is a "special case" loop. I definitely have one in my system as well. Inside the "for each" you can put code or a function defined elsewhere. If you want your code to look even more abstracted, you can put that "for each" loop in a function and then just call the function. In my language, you can also put any piece of code in a "procedure" which is just a callable chunk of code that shares the same symbol table space as the function. So the line could look like "DO x" where "x" is an arbitrary chunk of code. I could create "macro" functions that provide customized 100% debugged source code at compile time for almost any situation. A loop is still a loop.

I also have many functions that "loop" internally, and they can have local variables you would never see, BUT explicitly defining a loop and looking at the exact pattern per iteration reads best to me.
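(The four-line snippet referred to next did not survive; presumably it was an explicit loop of roughly this shape, shown here in Python with hypothetical names:)

```python
xyz = [1, 2, 3, 4, 5]   # hypothetical data
total = 0
for b in xyz:
    total = total + b   # the one-line "per element pattern"
print(total)  # 15
```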


At a glance I can see that all the elements of "xyz" will be traversed and I can see exactly what the "per element pattern" is. I can add selection criteria with an "if" statement etc. I don't see this 4 line set as 4 lines. To me, I see an iterated object with a 1 line pattern.

I don't see how hiding the details by sending a function and an object to another function is a step forward, although my language can easily do that.

I think of the whole of CS as the study of "patterns" that we can put into loops. If there were no loops in programs then none would last more than a fraction of a second. Finding the patterns in programming problems is what programming is all about IMO. All I want to know about loops is how many times they will execute, and under what condition will they stop.

BTW I have the capability of not only using full functions as a parameter or as a function result but all the source code of those functions can be changed as well. I am not against arbitrary functions being passed to other functions but sometimes the simple, obvious code reads best.

I always try to create the simplest code I can and when I do anything fancy at all, I explain with extra documentation. Sometimes hiding the functionality of the code just gets in the way, when there is a problem.

I understand what you are saying about local scope and accumulators but I don't see the problem. If you want to isolate a loop from the rest of the program, why not just put it in a function and call it? In my language that is simple to do from within any function at compile or execution time.

I could easily have blocks of FP code in my source code and have a macro send them to a translation function at compile time, where the functional code could be translated into executable source code and then compiled inline or as a called function.

I am definitely not against higher abstractions or manipulating code within programs, I just don't see how using recursion (when the algorithm isn't recursive) instead of looping is a step forward.


Why does the following code work?
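(The code block itself did not survive; from the invariant discussed next, it was presumably a simple imperative summing loop, sketched here in Python with hypothetical data:)

```python
xyz = [1, 2, 3, 4, 5]   # hypothetical data
total = 0
for x in xyz:
    total = total + x   # after iteration n, 'total' holds the sum of elements 1..n
print(total)  # 15
```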


We know the answer intuitively because we've been doing this for so long, but once upon a time we had to think about why it works. And the reason it works is because it maintains the following invariant:

Invariant: After iteration n, 'sum' holds the sum of the elements 1..n.

The recursive version makes that invariant more plain, even if you're not as used to it.

sum_of_1_to_(0) = 0
sum_of_1_to_(n) = sum_of_1_to_(n-1) + element(n)
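The two equations transcribe directly into runnable code; a sketch in Python (the helper names are mine):

```python
def sum_of_1_to(element, n):
    if n == 0:
        return 0                                     # sum_of_1_to_(0) = 0
    return sum_of_1_to(element, n - 1) + element(n)  # sum_of_1_to_(n) = sum_of_1_to_(n-1) + element(n)

xs = [1, 2, 3, 4, 5]
print(sum_of_1_to(lambda i: xs[i - 1], len(xs)))  # 15
```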

In this recursive version, there isn't a hidden "loop invariant" to figure out that makes this correct. Both of these equations are just obviously correct. The recursive version is logically simpler.

Granted, the imperative version is also trivial, but I suspect that any discomfort you feel with the recursive version is just due to your comparatively little experience writing things recursively.

I know why it works.

I know why it works and it has nothing to do with Math.

I know exactly how a CPU executes code. I know how it loads a binary string of bits from a location in memory into a register. I know the algorithm the CPU uses to simulate addition. I know that registers can count down 1 at a time and, when they hit 0, set a flag in the CPU that a conditional branch can test. When finished, the "result" is stored to a memory location.

How do I know it works? I can imitate exactly what the CPU does and I get the results I expect every time. I look for any "corner conditions" that might stop the program and if there are none, then it tested ok.

If we were talking about Math, then your explanation is very correct and accurate BUT we are talking about programming and computers which I say isn't Math.

Like you, I need no proof that looping works because I have used it in every program I have ever written. For me, no additional proof is necessary. Whether you can Mathematically prove looping or not has no bearing on the usefulness of it in CS.

I see Math as a syntax to describe relations and ideas. Math is used in almost all Sciences, but they are not branches of Math. I also think CS has a syntax that is equally valid in its own right, and that is code.

You say your recursive example is "logically simpler". I say that the pattern I show is exactly 1 line, and my program can be understood by people who don't even know how to program. Please don't presume to know what I know or what lack of knowledge you think I have. Arrogance isn't a sign of intelligence!

Something to do with math

Somewhere in your mental simulation of the CPU, alongside imagery of electrons and transistors, is the thought that "at each iteration, this register holds the sum of the preceding elements in the array". That's how you're able to conclude that the correct value is written to memory at the end of the iteration. That's simple "math" that your brain is working through. The only alternative I can see, that you're only simulating a couple of cases in your head and deciding that it works in general, would IMO be a sign of programming incompetence (and you don't sound incompetent).

My opinion is that this basic math that we're doing in our heads, regardless of whether it's formalized, should be kept as logically simple as possible. Doing so allows the brain to check it with less effort, freeing us up to handle more essential complexity. Abstracting away from the details of the CPU and reasoning about abstract values is another way to lighten the cognitive load. Of course an ability to drop down to a lower level of abstraction and think about what's going on in the machine is important, too, but thinking at a low level of abstraction all of the time is inefficient.

Also, I wouldn't assume that your non-recursive version is easier for a non-programmer to understand. The "equals" in the imperative version means "update" and not "equals", which can trip some people up. The main advantage of the imperative model of computation is that it translates more directly to what's actually going on in the machine.

Mathematics, hardware, software and the linkage.

You understand why it works and it has nothing to do with math?

Having been trained in my early days as an engineer (as well as a computer scientist), I would say it works because it is based on a particular field of mathematics. Even the simple acts of addition, subtraction, multiplication and division using computers need some understanding of numerical mathematics to ensure that the operations chosen, and the order in which they are used, don't lead to an answer that is significantly wrong.

What I am reading here is that you appear to have forgotten that hardware can have mistakes in its design and not do what it is meant to do. We have had enough examples of this over the years in readily available hardware.

If you are willing, it would probably be a good idea to read the articles by PoIR over at Groklaw on the subject of computers and mathematics. My personal opinion is that he has done an excellent job of explaining, in simple-to-understand English, that all software is mathematical in nature (i.e. it is mathematics) - he is trying to explain to lawyers and judges who on the whole have little understanding of any technology, though there are technologically astute individuals out there. A couple of his articles are What Does "Software Is Mathematics" Mean? - Part 1 and What Does "Software Is Mathematics" Mean? - Part 2

Having written and supported much software over the years in a variety of different fields (telecommunications, billing, pharmaceutical, HR, sales and marketing, asset management, data analysis, database management, etc), understanding the mathematical principles has meant that solutions otherwise not obvious have become obvious and available. It has also meant that errors not obvious have been found and corrected. It has also shown that management reports required could not be produced because they are mathematically unsound and hence of no use.

Understanding the nature of loops is not as simple as you make it out to be. I have seen more errors generated by looping constructs because they have not been understood than by lack of understanding of recursive solutions (as a percentage).

Though loops may appear to be easily understood, they are not the simple thing they appear.

I need no proof that looping works because I have used it in every program I have ever written. For me, no additional proof is necessary. Whether you can Mathematically prove looping or not has no bearing on the usefulness of it in CS.

A simple question then: How do you know it always works? How do you prove that the loop you have created is in fact doing exactly what you want it to do?

I have seen too much code written (in production) that appears to work most of the time and when analysed it is just sheer luck that the right answers have generally appeared. But when you get the appropriate data running through the loops, it just explodes. This is generally due to the fact that the writer has not "checked" the mathematics of the loop - either didn't know how or didn't think it was necessary.

You say your recursive example is "logically simpler". I say that the pattern I show is exactly 1 line and my program can be understood by people that don't even know how to program.

A challenge: Write the Ackermann-Péter function as a set of loops. The recursive definition is as follows:

Ack(0,n) = n + 1
Ack(m + 1, 0) = Ack(m, 1)
Ack(m + 1, n + 1) = Ack(m, Ack(m + 1, n))
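For concreteness, the three equations transcribe directly into a recursive function; a sketch in Python:

```python
def ack(m, n):
    if m == 0:
        return n + 1                  # Ack(0, n)     = n + 1
    if n == 0:
        return ack(m - 1, 1)          # Ack(m+1, 0)   = Ack(m, 1)
    return ack(m - 1, ack(m, n - 1))  # Ack(m+1, n+1) = Ack(m, Ack(m+1, n))

print(ack(2, 3))  # 9
```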

I have somewhere in my library a paper which translates this to a form that uses loops. If I recall correctly, it ends up being a number of pages long. It was also very difficult to prove that it was a correct implementation.

There are many useful functions that, when described by a recursive function, are simple in form and easy to understand, but that when translated to a form using looping constructs are difficult to understand and much more difficult to prove mathematically correct. Just look at a number of the high-end sorting algorithms as examples.

Mathematics is used in many areas to prove the correctness of computer systems and it is an area that is proving difficult because it is difficult.

Please don't presume to know what I know or what lack of knowledge you think I have. Arrogance isn't a sign of intelligence!

The only basis we have for judging what you may or may not know is what you write. And what you write indicates that you have not investigated at least some of the areas being raised in response to your comments. The other point that goes with this is that when a comment is made, don't just assume that it is attacking you personally. All that says is that you (the general you, not the specific you) are looking for a fight, and the result will be that you end up being ignored.

I myself do not have the background of many who respond here, and in many areas cannot hold a candle to them. I disagree with some of the comments made here. I think that certain areas or fields have gone down the wrong path and have missed better approaches. That is an opinion, and I also recognise that there are many areas in which I do not have the necessary knowledge to comment in detail, and in my own time I actively pursue that knowledge to the extent that I can.

I have somewhere in my

I have somewhere in my library a paper which translates this to a form that uses loops. If I recall correctly, it ends up being a number of pages long.

def ack(m, n):
    k = []                                   # explicit stack of pending outer m's
    while True:
        if m == 0:
            if k: m, n = k.pop() - 1, n + 1  # pop a frame: continue as Ack(popped-1, n+1)
            else: return n + 1               # Ack(0, n) = n + 1
        elif n == 0: m, n = m - 1, 1         # Ack(m, 0) = Ack(m-1, 1)
        else: k, n = k + [m], n - 1          # Ack(m, n) = Ack(m-1, Ack(m, n-1)): push m


CPS convert -> lambda lift -> turn linked closures into array. Works for any recursive algorithm, but the result is unreadable of course.

Problem with example given

Though it appears to get the correct answer (I have another example to compare the results with and I get the same answers; the code uses recursion and memoisation for quick results), the problem is in proving that it is correct. You are relying on the unstated fact that an empty array is equivalent to false and a non-empty array is equivalent to true. Also unstated is that k.pop() updates k. In other languages these assumptions may not be valid.

It took me some time to verify that I understood what the code was doing and that it matched the recursive code.

The particular proof I am thinking about uses no unstated assumptions which is where the length of the loop code becomes quite long.

However, I like the code you have given here, it does have a sort of prettiness about it.

Python idioms

Yes, it uses Python idioms. It's trivial to rewrite that code to use e.g. C arrays. It wouldn't become much longer either. To understand that code keep in mind that k represents the call stack. In the recursive Ack everything is in tail position except the inner Ack in Ack(m, Ack(m + 1, n)). So there is just 1 type of stack frame, namely the closure λx. Ack(m,x), which I represent with just the number m. The part k = k+[m] pushes a stack frame with value m on the stack, and k.pop() pops a stack frame.
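The correspondence is easy to check mechanically; a quick sanity test in Python, pairing the recursive definition with the loop version given above (function names are mine):

```python
def ack_rec(m, n):
    if m == 0: return n + 1
    if n == 0: return ack_rec(m - 1, 1)
    return ack_rec(m - 1, ack_rec(m, n - 1))

def ack_loop(m, n):
    k = []                                   # explicit stack of pending outer m's
    while True:
        if m == 0:
            if k: m, n = k.pop() - 1, n + 1  # pop a frame: continue as Ack(m-1, result)
            else: return n + 1
        elif n == 0: m, n = m - 1, 1
        else: k, n = k + [m], n - 1          # push a frame for the outer Ack

# The two versions agree on small inputs (larger ones blow up quickly).
for m in range(4):
    for n in range(4):
        assert ack_rec(m, n) == ack_loop(m, n)
print("versions agree for m, n < 4")
```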

I also have read articles.

I have read posts that say the "whole universe is Math". So what?

You presume to think I don't know the intimate details of how CPUs work, but you would be wrong. I have worked at every level of detail on micro-computers since 1975. Simply stating that CS is Math is no argument. I think you should get off your high horse and actually listen to some of my points and then judge accordingly. I see none of that in your post.

I am not against Math at all. I would use Math or any other algorithm from any place to solve computer problems. I think Math has enormous value, but it isn't CS. Math is used in some areas of CS, but the reverse doesn't follow.

I am sure you could show some recursive examples that couldn't easily be duplicated with non-recursive functions, and I would agree with you. These kinds of solutions don't come up very often in my experience, but then I normally don't program for Math projects. I personally use a few recursive routines in my current code. I do object to FP, and Haskell in particular, holding that eliminating a simple loop by writing a fake recursive function (tail-call recursion) is better or simpler.

I do take insults directed at me personally, but I think you are correct about "looking for a fight", although that wasn't my intention. With hindsight, I can clearly see it was inevitable on this site. You are correct that it serves no purpose to ask a carpenter not to think of problems as nails!

While I accept that a lot of

While I accept that a lot of code can be called 'looping', for the specific purposes of this discussion and on my blog I'm making a distinction about namespace/symbol table/scope/bindings.

The repeated application of a fixed sequence of statements to a single set of variable bindings.

This is what I was referring to when I said 'looping'. Anything which can, in general, be implemented with a single conditional jump without affecting the semantics falls under this definition; this includes for-loops, while-loops, block-structured GOTOs, code where all variables are globally scoped or dynamically/lexically inherited, and 'procedures' without their own symbol table, like you described using "DO x".

The nested application of a fixed sequence of statements, such that all variables are only accessible to their instantiation of the sequence.

This is what I'm calling 'recursion'. All data is discarded after each call except for the return value, and each call only has access to its arguments. This can be relaxed to allow dynamically- or lexically-scoped variables as well, but those are the minority of cases.
My point can't be illustrated with the single-line loops you've used, so I'll invent a (slightly) less trivial example, in Javascript:

var p, q, r, s;
var sum = 0;
for (b in xyz) {
  q = 2.5 + b / 100;
  p = q - 10 * r;
  s = 11 + p;
  r = q + 4 * b;
  if (s > 5) {
    sum = sum + r;
  }
}
This has a bug, namely that "r" is used in the definition of "p" before it's been updated, so the previous iteration's value will be carried over implicitly. On the first iteration r will be 'undefined' according to Javascript, so p and s will be NaN, causing the if condition to be false.

The insidious part of this bug is that, for all we know as maintainers, the original coder was cleverly exploiting this property of the loop and it might not be a bug after all. When the loop gets into hundreds (or, occasionally, thousands) of lines, this kind of thing is a nightmare to figure out.

Let's compare it to a recursive style:

var sum = xyz.reduce(function(sum, b) {
  var q = 2.5 + b / 100;
  var p = q - 10 * r;
  var s = 11 + p;
  var r = q + 4 * b;
  if (s > 5) {
    return sum + r;
  }
  return sum;
}, 0);

Now Javascript will give 'ReferenceError: r not found' (if xyz actually contains something). It's also clearer what the code is trying to do. It's using 'reduce' (AKA 'fold'), so it's trying to combine many values to produce one answer. What's that answer? The return value.

It's very rare that I'll write a for/while loop, even in PHP, Python or Javascript. Although stack space is an issue for explicit/mutual recursion (no tail-call optimisation), it's generally never an issue since all my loops are based on datastructures, and those languages have stack-safe implementations of map and reduce/fold built-in.

In a language without first-class functions, like Java, the boilerplate required for recursion combinators (map, fold, unfold, etc.) generally makes the intent of the code less clear than a for/while loop. However, a method is essentially a function with an accumulator as its first argument (implied 'this' in Java, explicit 'self' in Python), so a chain of method calls on objects looks much more like recursion than it does looping.

My intent is to argue that loops default to the dangerous and bug-causing behaviour of implicitly persistent bindings, and force anyone wanting the more common use-case of iteration-local variables to explicitly override this, in a problem-specific order, if they remember. Recursion, on the other hand, kills all data at the end of each call, unless the programmer explicitly returns it.

I see this as similar to GOTO and the implicit fall-through of switch cases. All three of these 'dangerous by default' constructs (loops, GOTO, switch fall-through) are language design decisions which, if implemented in a safe-by-default and most-common-by-default way, could have prevented countless bugs over the past few decades.

I don't see that as abstract Maths, I see it as practical Software Engineering discipline. Of course, this can be taken further if desired: once loops are replaced by recursion, most variables turn out to be single-assignment, so we can see mutable state as a dangerous corner-case and remove it. With an immutable language, statically typing the single-assignment variables is an obvious safety feature, and since most functions turn out to be pure, it's also safer to explicitly tag those edge-cases which perform effects. With pure functions and static types, most assertions can be verified at compile time, which justifies using dependent types. That takes us from Javascript to Agda in small, justified steps.

Of course if we wrote a Javascript library which required users to pass in constructive proofs that their arguments are correct, we wouldn't get many users. But that doesn't mean these ideas are disconnected from the real world of everyday programming.

Splitting hairs in JavaScript

Now Javascript will give 'ReferenceError: r not found'

Actually, r is bound to undefined. It's just bound that way on every iteration this time.

Your point stands. It's easier for passersby to figure out that this code is buggy, since the function body is equivalent to "return sum".
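A quick sketch of the hoisting behaviour in question (the variable names here are only illustrative):

```javascript
// 'var' declarations are hoisted to the top of the enclosing
// function, so reading r before its declaration line yields
// undefined rather than a ReferenceError.
function hoisted() {
  var sawUndefined = (r === undefined); // no error: r already exists
  var r = 42;
  return sawUndefined;
}
// hoisted() returns true
```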

Of course if we wrote a Javascript library which required users to pass in constructive proofs that their arguments are correct, we wouldn't get many users.

Oops, that's the kind of library (or language) I'm trying to develop right now. :) I'm not very concerned about avoiding bugs, but I would like to avoid entrenchment. I want my code to remain useful as-is, even if someone wants to upgrade its dependencies or run it on a different platform. I'm already using JavaScript because of its platform reach, so this isn't all that far of a stretch.

Oops, that's the kind of

Oops, that's the kind of library (or language) I'm trying to develop right now. :)

No offence intended ;)

My point is that passing proof objects to functions is perfectly acceptable, *once we have already accepted a progression of ever-more-rigorous safety measures* (static types, immutability, etc.). The reason this stuff can be seen as abstract nonsense is when we offer the end result without the intermediate stepping stones. To illustrate this I used the example of an onerous safety requirement in a language where code can go wrong for any number of reasons which simply aren't issues in other languages (eg. type coercion, namespace clobbering, incompatible runtime (especially now that Firefox and Chrome auto-update!), etc.).

For *the majority* of Javascript programming, generating proof objects is just jumping through hoops for no real benefit (the whole stack may fall over at any time). Hence the small number of users (compared to all Javascript users). Doesn't mean it's not very well suited to a *minority* use-case though.

Compare this to, say, Haskell. Writing a library which requires proof obligations doesn't seem too unusual, and many developers may end up using it (proportionally; there will always be far more JS devs than Haskell devs).

Not arguing

I agree with the general point you're making, and the reason I'm chiming in is different. You're talking about stepping stones of safety, but my desire for platform independence ("safety" from platform changes?) leads me independently toward two practices: One, take advantage of JavaScript's standardization and industry entrenchment as a pragmatic basis for platform independence. Two, seek out a more mathematically sound basis involving proof objects. I just found it interesting that you juxtaposed the same topics with a different spin on the relationship. :)

incompatible runtime (especially now that Firefox and Chrome auto-update!)

Breaking updates are frustrating, but hey, it could be worse! A single person may want to use the same app with many different devices, even over the course of a day. And orthogonally, a single app may be in demand by a wide variety of users with many different accessibility and localization needs. If the Web browser changes to a different version but the OS, hardware, and interaction method stay the same, I count that as three blessings... and yet, a thousand cuts.

Usefulness of category theory?

An abstraction can help in two ways:

  1. It lets you focus on the things that matter clear of any distractions (~= information hiding)
  2. It lets you reuse reasoning for different concrete situations (~= code reuse)

Things that are more abstract let you hide more details and reuse more, but let you do fewer things. For example, a group is more general than a field: when you prove something for groups it applies to more structures (more reuse) and you have to worry about fewer details, but the things that you can prove about groups in general are often fairly trivial when you look at a concrete example of a group. Take any theorem about groups, substitute the integers as the group, and now you have a very trivial theorem. Fields have more structure, and the things that you can prove about them are often less trivial. (There is even a theorem, Cayley's theorem, which says that group theory is essentially the study of bijective function composition, so the abstraction was unnecessary in the first place; there is a similar result in Category theory.)

The advantages that an abstraction brings have to outweigh the costs. With hundreds of terms and concepts, category theory's weight is quite high. What are the advantages that outweigh this cost? In the examples that I've seen, the abstraction is so high that you can only say things that are extremely trivial when you translate them back to the concrete case you care about. Can somebody give examples of theorems in category theory, that when translated to a concrete structure we care about, are worth the cost of category theory? Or if there are none, perhaps explain where the value of category theory lies, if not in the theorems.

You could have invented category theory

Can somebody give examples of theorems in category theory, that when translated to a concrete structure we care about, are worth the cost of category theory? Or if there are none, perhaps explain where the value of category theory lies, if not in the theorems.

I'm not very well-versed in category theory myself, but I have a fun or frustrating answer for you: I think category theory is mostly a useful abstraction for people whose "concrete" problems are already mathematical in nature. Fields of mathematics don't have to have much in common, except for your observation that they can be "translated to a concrete structure we care about." Category theory provides reusable tools to reason about the translations themselves.

By asking what category theory can be used for, you're asking a category-theoretical question. :)

My take on category, from an

My take on category, from an excessively high-level perspective: Mathematics is the study of things that are, in some sense or other, well-behaved. Well-behavedness is itself well-behaved. The mathematical study of well-behavedness is category theory.

Message Passing and Actors

To answer your original question, the Actor Model assumes message passing. It is not the message that's important; the isolation and independence of the Actor is. This is why the terms should not be considered interchangeable, and why some would bristle at the utility of your recommendation.

To answer a later question, "would the Actor Model have been invented without computers?" Yes. It happened. Think of chess by mail.

Look at mathematics this way. You can write many great programs without explicit reference to mathematics. But, just as the Theory of Special Relativity relies on the Pythagorean Theorem to derive how matter and time change as velocities near the speed of light, one needs mathematics and statistics to prove or disprove hypotheses of computer science. Maybe a better analogy is playing poker. One does not need mathematics to win. Some understanding of probabilities may help risk assessment which will promote better results for a player over a long term. But, one needs mathematics to express and prove results in game theory, which may have some bearing on playing poker in the grandest sense.

One of the poster languages for the Actor Model is Erlang. You may be fascinated to read the language's history. It was developed by a team of engineers solving a very large-scale practical problem.

CS has a perfectly

CS has a perfectly unambiguous syntax for describing algorithms and it is called source code. We even have an independent third party that judges if that code is correct or not and that is a compiler and our CPUs.

The notion that source code is unambiguous makes me howl with laughter. In source code translation I deal with MANY ambiguities; things that have a definite meaning in one language and are unspecified in another, things that have static values in one language and are dynamically determined in another, things that are subtypes or supertypes or have some subset of common values relative to each other within and among languages and deciding what the conversions should be, things that have a known invariant in one language and do not respect that invariant in another and how to translate code that depends on that invariant, etc.

If source code were unambiguous then porting code between systems would be effortless. If that were true, different compilers would never give different answers unless one of them were wrong. If it were true, the size of C's int, pointer, float, long, etc. would be the same on all platforms. If it were true, race conditions could never occur. If it were true, then dereferencing a stale pointer would have the same predictable results, every time. If it were true, there would never be different answers from floating-point code depending on whether it runs on Intel or Motorola CPUs (Intel uses 80-bit internal registers). If it were true, then legal C sequences like

int c = 0, d = 0;
while (c < 20)
    d += (c++ + ++c + c--);  /* unsequenced modifications of c */

would give the same answer regardless of what compiler, optimization settings, or platform you used. If it were true then MACLISP code would never have gotten confused between dynamic and static variables, and the compiler would have given the same results as the interpreter.

And so on. I want to point out that not all of these are even programming errors. In most language implementations there are dozens, or hundreds, of perfectly legal expressions that don't have predictable or repeatable results. In most language standards there are also dozens, or hundreds, of perfectly legal expressions whose results are unspecified by the language standard and could therefore be called "programming errors." These are by no means the same set, although they usually have a fairly large common subset, and most people using the language (and just "testing" as opposed to doing the math) don't keep tabs on the difference.

Weak specifications

The points of difference you mention are points in which the AM specifies weaker conditions than your language. For example, the AM does not require messages to be delivered in order. That does not mean they will be delivered out-of-order, so an implementation that always delivers in order nevertheless satisfies the AM.

An essential feature of the AM which you don't mention is that when an actor changes its behavior, it tail-calls the new behavior: there is no way back.
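A minimal Javascript sketch of that "tail-call the new behavior" idea (all names are invented; this is not any particular actor library): a behavior handles one message and returns the behavior for the next message, replacing itself with no way back.

```javascript
// Each actor owns a mailbox and a current behavior. Handling a
// message returns the behavior for the *next* message -- a one-way
// replacement, like a tail call into the new behavior.
function makeActor(initialBehavior) {
  var behavior = initialBehavior;
  var queue = [];
  var busy = false;
  function drain() {
    while (queue.length > 0) {
      behavior = behavior(queue.shift()); // 'become' the new behavior
    }
    busy = false;
  }
  return {
    send: function (msg) {
      queue.push(msg); // messages to one actor are queued in order
      if (!busy) { busy = true; drain(); }
    }
  };
}

// A toggle: 'off' handles one message then becomes 'on', and back.
var log = [];
function off(msg) { log.push("off saw " + msg); return on; }
function on(msg)  { log.push("on saw " + msg);  return off; }
// makeActor(off) then sending "a", "b", "c" logs:
// "off saw a", "on saw b", "off saw c"
```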

Is AM the toolset that implements my design?

I think that was your first point. I don't think that guaranteeing something (order of messages) is the same as saying they "may be" in order. If they can be out of order then you must assume the worst case scenario, regardless of what happens most of the time.

The fact that the AM has no guaranteed response mechanism, and no validation that the address exists or that the message actually got there, is also "somewhat" significant. There is a reason that TCP is used for web pages rather than UDP, which just blindly sends a message.

I have read about "tail-calls the new behavior" but I am not sure what that means exactly. In my system, messages can absolutely change the persistent state of actors (objects in my case) but what happens next is absolutely up to the programmer. It seems to me that the AM was designed to work at a much more atomic level than what I have implemented.

I don't think that

I don't think that guaranteeing something (order of messages) is the same as saying they "may be" in order. If they can be out of order then you must assume the worst case scenario, regardless of what happens most of the time.

It's not a protocol. It doesn't tell you that two different systems that each individually comply with the model will be able to talk to each other. The model will tell you something about the systems that fit it, that they will have certain types of behaviour.

The "I don't care whether messages are sent in order or not" is not a specification to be implemented between elements of the system; it's a requirement to be met by the entire system in order to fit the model. If a system fits the model, then the model guarantees certain behaviour for that system.

In this case, it's a lack of requirement. It says "Your system may pass all messages in order, it may not, I don't care. Whatever the ordering of messages within your system, I can tell you other things about the behaviour of your system". It isn't "elements of your system may not expect an ordering of messages".

I used to feel that...

I once felt that "pure math" was sort of a waste of time in programming. In particular, it would just piss me off when someone "defined" something by saying a true thing about it without giving any way to actually determine its value. To me, thinking in operational semantics pretty exclusively at the time, such a definition was essentially useless for computing purposes.

That was before I started on the problem of writing software to automatically translate other software between different languages. You may reach this conclusion, eventually, through working on some different problem, but the translation issue was where I hit it.

It turns out that you have to do that hardcore theory to get a firm grip on what it is that a program in some language says, and how a program working with different primitives and structures in a different language can say "the same thing" -- indeed, what it means to say "the same thing." At some point you have to build a denotational semantics of the function or relation that relates the input (all of it) to the output (all of it).

The rules of most programming languages form an operational semantics. In order to figure out what's being said in denotational terms, you have to build a mathematical model of what the operational semantics mean. And to then express that denotation using a different set of operational primitives, you have to pick it apart very carefully and often in ways that don't match how it was structured in the original language. Doing the math is the only way to be sure you're getting all the corner cases.
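To make the "same denotation, different operational primitives" point concrete, here is a toy Javascript sketch (the expression language and every name in it are invented): two machines that do different things step by step, yet denote the same input/output function.

```javascript
// Expressions: a number, or {l: e1, r: e2} meaning e1 + e2.

// Machine 1: a direct tree-walking evaluator.
function evalTree(e) {
  if (typeof e === "number") return e;
  return evalTree(e.l) + evalTree(e.r);
}

// Machine 2: compile to postfix instructions, then run a stack
// machine -- structured quite unlike the original tree.
function compile(e, code) {
  code = code || [];
  if (typeof e === "number") {
    code.push({ push: e });
  } else {
    compile(e.l, code);
    compile(e.r, code);
    code.push({ add: true });
  }
  return code;
}
function runStack(code) {
  var stack = [];
  code.forEach(function (ins) {
    if ("push" in ins) {
      stack.push(ins.push);
    } else {
      var b = stack.pop(), a = stack.pop();
      stack.push(a + b);
    }
  });
  return stack[0];
}
// A correct translation means runStack(compile(e)) === evalTree(e)
// for every expression e, even though the machines differ.
```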

Right now one of the best and most useful models I can build of a program is its expression in lambda calculus. But even that misses things that work differently because of fixed-width integer wraparounds and often leads to analyses that allow mishandling of the available precision in fixed-width floating-point roundoffs. So I wind up also needing to build (mathematical, semantic) models of how the available numeric operations differ from theoretical perfect numeric operations, and express *that* as part of my semantic model of a program.

Anyway; there are useful problems you can't solve without the hardcore math. Software translation is one such problem, but you can find others.

Stating your premise doesn't make it so.

I would rather not belabor my point about the difference between CS and Math but I have written a language that contained database power tools and sold over 30,000 copies in a single year. That software is still working for some companies 30 years later.

I wrote a word processor in over 40,000 lines of assembler for a Z80 processor in 1979. I have written device drivers, I even invented a duplicate for XML without ever seeing anything about XML or HTML etc. I say "invented" because I solved the problem without any outside information but I am sure I wasn't the first. I have also used many languages to implement application programs in at least 1,000 plus projects for at least 65 companies.

I have never used anything but elementary Math in my compiler writing, systems or application code, even though it was my minor at University.

I can trivially and exactly describe how my language works without resorting to any Math at all.

This is the second post that says that Math is required for "corner cases". I have programmed the "corner cases" in every program I have ever written. All programs have "corner cases" and as a professional programmer, I have to know what those are to fully test the code. That is programming not Math. It is possible that some code optimizations might be helped with some Math. If that is the case, I would say that optimizing code might be a specialization of PL writing. I have written thousands of lines of assembler, wrote my own one pass assembler and a symbolic debugger so I know what it takes to juggle registers etc. At this time I am not writing a JIT compiler for my system so it is possible that that work would need more Math but I am not convinced of that.

You say "Anyway; there are useful problems you can't solve without the hardcore math. Software translation is one such problem, but you can find others."

I am proof that your "theory" is wrong. It only takes one instance that fails your premise for an absolute statement like the above, to be proven wrong.

BTW I don't use a code tree for parsing. I use stacks instead. BTW It still works.

Let's just agree to disagree and then I won't be so annoying to the regulars on this list.

Stating your premise doesn't make it so

For all your stated achievements, how do you "know" that what you have written was correct? What techniques did you use to verify the programs and system thus created?

I have met quite a few programmers who far outstripped me in their coding, but the lack of mathematical knowledge on the part of some of them hampered them in creating a better solution. Those that had the mathematical knowledge were able to produce much better software.

This is the second post that says that Math is required for "corner cases". I have programmed the "corner cases" in every program I have ever written. All programs have "corner cases" and as a professional programmer, I have to know what those are to fully test the code.

I have come across many professional programmers who know best but they have not been subject matter experts and have missed many of the "corner cases". As a professional programmer, my job in part is to discuss with my client what they consider to be the corner cases and then extrapolate to other possible "corner cases" in the code based on their subject matter expertise. This takes a lot of hard work.

As a programmer, the best you initially have is the design documents and these never cover the required knowledge needed for the system. So much is left out.

I am proof that your "theory" is wrong. It only takes one instance that fails your premise for an absolute statement like the above, to be proven wrong.

You're not "proof" that his "theory" is wrong as we don't have any verification that what you have done is in fact correct. In matters like this the rubber has to hit the road.

BTW I don't use a code tree for parsing. I use stacks instead. BTW It still works.

Stack based parsing is mathematical in nature. There is a whole branch of theory that deals with this subject. So even if you don't know the theory and are just using the methods, it is still based in mathematics.

As well, since we haven't seen it, we can't determine if what you have written works or not.

Stories aren't arguments.

I prove my code by testing it, just like all programmers do. I obviously have many methods of doing that like all other programmers do.

If I implement a totally provable (Mathematically provable) algorithm which isn't appropriate to my situation or solution, is it still perfectly correct? I don't know about you, but syntax errors take next to none of my programming time.

Professional programmers have a lot of expertise in getting the "subject matter experts" to reveal the information that is important to the system and to the expert. In most cases, I would look for the corner cases before the so called expert but developing code is a dialog rather than a monologue.

Are you suggesting that my language/system isn't still being used or that Paperback Software (our publisher) didn't sell 30,000 copies of my code in 1987?

I don't consider a stack to be Math. If Math also uses a stack then great, it works for me as well. I couldn't care less about stack theory. I know how they work on a CPU and I have implemented them myself for more years than you have been alive.

I consider your last comment over the top and I won't waste any more of my time with your posts.

Agreed: Stories aren't arguments

It is up to you whether or not you think something is over the top. But at this point, you have not provided any evidence for many of your statements.

Professional programmers have a lot of expertise in getting the "subject matter experts" to reveal the information that is important to the system and to the expert.

Sadly, I disagree. So often I wish this was the case. But over many years, I have found that the opposite is true. I have come across masters who have got/can get the required information. I have tried to learn from them. But they seem to be few and far between. My experience (coming in after the fact) has been that the professional programmers supply systems that they think are needed and are easy for them to program, and require the end users to change how they do things on the premise that the professional programmers know best.

I know it can take a lot of hard yakka to determine and hence program the system in the way that will make life easier for all who will use the new system. There are many factors that work against this.

Are you suggesting that my language/system isn't still being used or that Paperback Software (our publisher) didn't sell 30,000 copies of my code in 1987?

Not at all, but you have not given the name of your language or system for anyone to look at.

I have had (to my dismay) to correct, maintain and enhance other "professional programmers" work over many years and their lack of knowledge has been evident in their work.

I am happy to place you in the same group, if you so desire, as you don't seem willing to learn from the masters who are on this site. Note, I am not one of those masters, just an old white fella trying to learn.

I know how they work on a CPU and I have implemented them myself for more years than you have been alive.

<What I hope is a Humorous response on>

"The day the earth stood still." Oooh, family stories. Aren't grand-children wonderful? But don't they wear you out? Particularly when chasing the sheep around the paddock. Do you have great-grand-children yet? I am looking forward to that? Creaking knees mean I canna walk down 30 stories any more. Everybody behind me will buuuurn. Means I gotta take the lift during a fire. Do you have to do that too?

Mind you I have had various groups of young people I have worked with over the years putting my age at very much under my true age even with my very obvious grey hair.

I just love some of these assumptions about age - I at least try to find out how old someone is before making such a statement. LOL.

</Humour off>

Confidence and distinctions

If I implement a totally provable (Mathematically provable) algorithm which isn't appropriate to my situation or solution, is it still perfectly correct?

Mathematical proof can provide very comprehensive evidence that an algorithm is actually as useful as we think it is, but only if that usefulness (or some formal approximation of usefulness) is what we set out to prove. If we've successfully proven our algorithm is useful, but it's actually inappropriate to our situation, then we must have a messed-up definition of "useful."

The way I see it, mathematical proofs are an extreme kind of unit test. The proof verifier can run alongside other tests, and yet the results it gives can tell us an algorithm works as we expect in all cases, rather than only the cases we've been able to test so far. As with other test coverage, our proof coverage doesn't necessarily have to be airtight for our product to work in practice, especially if we have other sources of confidence: Deep technical knowhow, an open dialogue with the people imposing the requirements, and/or the ability to update the product after release. If some of these other sources of confidence have been sufficient for you, that's fine... at least for you.
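One way to picture the testing/proof spectrum (this sketch and its names are mine, not any real testing library): a property test asserts a law over sampled inputs, where a proof would discharge the same statement for all inputs at once.

```javascript
// Check a law -- here associativity of a binary operation -- over
// randomly sampled inputs. A proof covers all inputs; this only
// samples, but it is the same *statement* being checked.
function associativeOverSamples(op, gen, samples) {
  for (var i = 0; i < samples; i++) {
    var a = gen(), b = gen(), c = gen();
    if (op(op(a, b), c) !== op(a, op(b, c))) return false;
  }
  return true;
}

var smallInt = function () { return Math.floor(Math.random() * 1000); };
// Integer addition passes; integer subtraction is caught quickly.
```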

The cost it takes to train a programmer in advanced mathematics and proof assistants may or may not be worth the extra degrees of quality assurance they learn how to provide. I think it depends on both the programmer in question and the kind of product they'll make. Since programmers (and mathematicians and other people) can gravitate toward tasks that fit their skills, I'll elaborate on some programming scenarios that could demand math the most.

Open dialogues and live updates may be valuable, but they're not cheap to come by in some cases of product development. For example, dialogues and updates become hard when each user expects the product to be a reliable platform for their own independent creations (e.g. when the users are programmers!). They're also hard when the number of users is especially large (e.g. all the users of a very popular platform) or the number of maintainers is especially small (e.g. a single developer who has mostly finished this project and has moved on to another). In cases like these, where dialogue and updates are expensive, knowhow and testing are particularly cost-effective. It can pay to have programmers around whose knowhow includes advanced mathematics and whose tests include comprehensive proofs.

My scenarios aren't grounded in anecdotal experience. I haven't lived long enough! But fortunately, personal stories aren't the arguments you're asking for. :)

I don't consider a stack to be Math.

I don't know if this applies to you, but when people hear "math," they often leap to thinking about arithmetic worksheets and memorizing formulas. If they leapt to systems of clear specification and abstract reasoning, they'd get closer to what mathematicians do, I think. Of course, they'd also get closer to what programmers do.

You're trying to distinguish math from CS by saying that CS deals with what we can actually do on a computer, while math resides "in a world of its own where things can be infinite or other worldly." But what about distinguishing CS from other kinds of engineering? What would you think of defining a "computer" as a machine that takes otherworldly mathematical specifications and brings them to life?

Math isn't CS

I agree that some simple CS concepts can be phrased in Math and some of those can be proven with Math as well. You say that (Math can prove that an) "algorithm works as we expect in all cases" but CS deals with the real world and physical CPUs that don't need perfect proofs that work in all cases. They only need to work in the cases you require them to. If you want to prove every single algorithm Mathematically before you use it, then, "be my guest" but you won't have much of a project.

The Actor Model is a very good example of this "Math proof" meme as, whatever the Mathematical properties that Hewitt proved, his actual description of message passing wasn't useful in the real world. His conclusions IMO were useless.

What difference does the Math make when a messaging algorithm doesn't even guarantee that the actor you are sending the message to, exists?

I have already said many times that I would use Math whenever it was useful. My head is NOT stuck in the mud as are some others I won't name. It sounds to me like I have found a forum that says "if it's not Math then it's not CS". Is it any wonder that the most useful projects in CS have nothing to do with groups that think this way?

I consider Math (as used in CS) to be an unambiguous formal method of describing an algorithm. I also define a language's source code to be an unambiguous formal description of an algorithm. If it wasn't unambiguous and reproducible, no useful code could be generated and programming couldn't exist. The difference is that a computer can actually execute source code (when translated) but (in general) knows nothing about the Math. For the Math types, I guess that Haskell was designed for this exact reason. I have read Haskell documentation that says that errors are handled poorly, and others have said that data handling is not part of the language. Did somebody forget to tell these people that in most programs, the functions are simple but the data is complex?

Would you get a Civil Engineer in to fix your plumbing or a plumber? Why would a company hire a Mathematician to design a computer system or to program one when very competent and smart CS professionals and programmers are readily available? How can 2 equally smart individuals be equal at CS when one spends 1/2 their time getting good at something else (Math in this case)? I have personal experience with software design by a world-class Mathematician and a professional programmer whose skill is mostly problem solving and systems. The Math professor definitely should have stuck to his Math! I will freely admit that 1 instance doesn't make the case!

Math has nothing to do with efficiency or quality code! Stating a conclusion doesn't make it correct. CS deals with a huge number of things that a Math person would have no more expertise at than the end user. If you don't know what I am talking about then you don't program systems for real end users.

Math doesn't have a patent on algorithms. A stack has been used and required by almost all CPUs that have ever been invented. Math doesn't have a monopoly on "clear specification" or "abstract reasoning". All Sciences use both of these and they also use some Math but they aren't Math. If a Chemist describes a molecule with a chemical formula, do we say that is Math? Would you say it is a "clear specification"? Why is CS the exception?

CS definitely isn't Engineering, but some programs and algorithms from Engineering can be usefully applied in CS. Some Engineering ideas are used in CS, just like some Math ones are.

Obviously how I view CS and Math and your views are different and further comments will go nowhere but I will add this.

I have a minor in Math. I have a major in CS. I took a "Computer Engineering" course in 2000 even though I have 37 years of experience in the CS field. I have completed over 1,000 projects for over 65 companies. I have programmed a language/database that sold over 30,000 copies worldwide. I have programmed in at least 15 computer languages and created at least 4 others. I have worked from assembler to Powerbuilder and I even wrote multi-user code in assembler and NetBios to make my database a database server in 1990.

I have rarely used anything other than grade 12 Math in my entire career. How do you explain my experience and the lack of any importance (at least in the areas of my career) for Math in real world programming and CS design? Please don't insult my intelligence, by arguing the truthfulness of the above experience, as I don't need to lie or exaggerate to win some argument on this forum.

By the way, I have also designed every computer project I have programmed and most of the time, the actual tools I used to implement them.

What would you think of defining a "computer" as a machine that takes otherworldly mathematical specifications and brings them to life?

I am sure you and many other Math (CS wannabes) would like CS to be distilled into a Math formula but unfortunately if you actually knew anything about real CS, you would know that CS is a lot more than just formulas.


Related: http://programmers.stackexchange.com/a/137075

Interesting read.

Thanks for posting the link - this is quite a gem.

I have never used anything

I have never used anything but elementary Math in my compiler writing, systems or application code, even though it was my minor at University.

Maths tends to get 'distilled' from solving real-world problems. Many problems can be solved without focusing on the Mathematical aspect, but in some cases it can be useful to tease it out. Having a Mathematical model of an application's domain isn't always essential, but it can help with coding, as it provides a reference which half-written code can be compared against. For example, if I'm writing a card game I can test that it's giving sensible results by making a histogram of the output and comparing it to known models of playing cards.

When the application is a compiler, the domain is (higher-level) code. To make sure the implementation is sensible, we can compare the meaning of our compiler's output (registers/CPU/memory/etc.) to what our higher-level code is *supposed* to mean.

Many languages are successful without such a model. For example, the meaning of Python code is defined as 'whatever the CPython implementation would do with it'. This is fine for most developers writing code in Python, but not for developers who are writing their own Python interpreters. Jython, IronPython and PyPy are *always* broken by this definition, since they're doing different things with the CPU, registers, memory, etc. The only way they can be correct is if they *are* CPython.

Of course, dismissing Jython, IronPython and PyPy like this is unsatisfying. They do implement something like Python, but how can we quantify that? If we use a higher-level description, for example using call stacks, numeric operations, etc. then they *do* all implement the same language. This is what's meant by 'operational semantics'; giving code meaning in terms of the operations of some *abstract* machine. A physical machine is just a special case of operational semantics. If we only use physical machines, your Z80 applications would be 'broken' when running on a PC Z80 emulator, since the hardware is doing different things. But if we treat the Z80 as an abstract machine, which can be implemented by a physical Z80 or an emulator, then the applications are the same, as we'd expect.

Of course, once we're working with abstract machines, we can make them do all kinds of nice things that aren't possible with real hardware. This is where Maths helps us, since we can use it to study our abstract machines and make sure we base our language on one which has nice properties.

The alternative approach is to compare inputs and outputs. When we give our code to Jython, IronPython or PyPy, will it give the same output as CPython, regardless of what the machine's doing? This is what we mean by 'denotational semantics'. This benefits from non-elementary Maths, since it can be tricky to express useful language features (e.g. exceptions) as transformations from inputs to outputs.

There are other ways to model languages too, and *none of them are necessary to create a language*. However, being unnecessary doesn't mean they're not incredibly useful.

For those of you who like FP

For those of you who like FP so much, I just want to share a little functional algorithm I wrote in 1 line of APL in January 1976. I do understand that APL isn't FP but I think APL could be considered a special purpose kind of functional language that emphasizes matrix operations.

The object of the code was to do a search and replace. Here goes.

  1. I produced an equality matrix with the search string down the left and the string to search across the top.
  2. I negatively rotated each row of the matrix by 1 more than the previous row, using an integer generator.
  3. This lined up all the 1s in the matrix where the 2 strings matched.
  4. I summed the columns to create a vector.
  5. Wherever a number in that vector equalled the length of my search string, that marked a match.
  6. The problem was that the rotations wrapped around the ends of the rows, so I had to remove the last (length of search string minus 1) entries from the vector.
  7. I could now break the string at the location of the proper count, concatenate the replacement string, and then concatenate the remainder of the string with the search string's length chopped off the front.

If the above sounds complicated, it wasn't easy to write either, but it did work, and all the functions needed fit easily on 1 line of code.

There you have the most useless search and replace ever programmed using the tools that APL provides.

This is the C code I currently use for search and replace. I only show you this code because you will understand exactly what it is doing (even if you don't program in C) and it will run like a bullet (useful rather than useless).

#include <string.h>

short strinsert(char *ptr1, char *ptr2, short pos);
short strdelete(char *ptr1, short pos, short mlen);

short strreplace(char *ptr1, char *ptr2, char *ptr3)  // find string 2 in string 1 and replace with string 3
{
  short i=0, mlen, mlen2, mlen3;

  mlen=(short)strlen(ptr1);
  mlen2=(short)strlen(ptr2);
  mlen3=(short)strlen(ptr3);
  if (mlen2 == 0)                        // an empty search string would loop forever
    return -1;
  while (i + mlen2 <= mlen){
    if (strncmp(&ptr1[i], ptr2, mlen2) == 0){
      if (strdelete(ptr1, i, mlen2) != 0)
        return -1;
      if (mlen3 > 0){
        if (strinsert(ptr1, ptr3, i) != 0)
          return -1;
      }
      mlen+=mlen3 - mlen2;               // length changed by the replacement
      i+=mlen3;                          // continue after the replacement text
    }
    else
      i++;
  }
  return 0;
}

short strinsert(char *ptr1, char *ptr2, short pos)   // insert string 2 into string 1 at pos
{
  short mlen, mlen2, mmove;
  char *ptr3;

  mlen=(short)strlen(ptr1);
  mlen2=(short)strlen(ptr2);
  if (pos > mlen)
    return -1;
  ptr3=ptr1 + pos;
  mmove=mlen - pos + 1;                  // shift the tail right, NUL included
  memmove(ptr3 + mlen2, ptr3, mmove);
  memmove(ptr3, ptr2, mlen2);
  return 0;
}

short strdelete(char *ptr1, short pos, short mlen)  // delete mlen chars in string 1 at pos
{
  char *ptr2, *ptr3;
  short mlen1, mlen2;

  mlen1=(short)strlen(ptr1);
  if (pos > mlen1 || pos + mlen > mlen1)
    return -1;
  ptr2=ptr1 + pos;
  ptr3=ptr2 + mlen;
  mlen2=mlen1 - pos - mlen;              // tail length after the deleted span
  memmove(ptr2, ptr3, mlen2+1);          // +1 copies the terminating NUL
  return 0;
}

Unstated assumptions cause problematic implementation

Your code has an unstated assumption which is not true of strings in general: that strings are terminated by a NUL character. The code will fail in unexpected ways for character data that includes 0-value bytes, which includes variations of 8-, 16- and 32-bit character streams. So, given buffers that contain general string data, your functions as written will give unpredictable results.

Your functions have another unstated assumption (though it is in the code): that all your strings will have a length equal to or less than the maximum value of a short, which is dependent on the C compiler used. They cannot handle strings longer than this unstated value.

It may not seem like much, but understanding the "mathematics" of the problem is essential to getting the solution correct.

An example of not understanding this kind of problem and biting the hand that feeds it is the following.

There is a well-known ERP system which allows the user to store large chunks of text within the data backend. When writing interface software against the middle tier of this ERP system, one uses the middle-tier API. This system has been in use since the late 70's and early 80's.

However, there is a little quirk, which I found out only by accident about 18 or so months ago. If the character data you supply to the API is not sent in a very specific mathematical way, then the data finally stored is not the data that was sent. Even though you can vary the length of the lines stored, you have to space-fill each line to 80 characters within a 160-character block. The API calculates blocks of 160 characters, but lines are a maximum of 80 characters, and you can specify lines of 1 to 80 characters. The buffer of data being sent can contain any number of lines but has to be an exact integer multiple of 160 characters.

The vast majority of users only ever see 80-character lines (the system default and system interface default). At the time, we needed to store 60-character lines (needed for specific reports). No documentation anywhere stated what we found. Over a couple of days analysing communications logs and backend data storage, I worked out what the mathematical criteria involved were, and hence made sure that the code I wrote took all of this into account, so that anyone using my code could specify what they wanted and it would do all the heavy lifting.

Not understanding the mathematical implications of your code (including the various assumptions) can cause unpredictable results.

Not exactly my point.

All my code that would use these functions is guaranteed to end with a "0". No string can be longer than 256 characters by definition.

Math is not required.

So a specific problem, in a specific API, makes the case that CS is Math?

I use totally different routines for data larger than 256 bytes, which I have tested to 50M. I will only allow data in that buffered data type that tests correctly at its maximum size.

What part of boundary cases do you say is Math? If what you describe here is what you call Math, then everything in CS and Physics and Chemistry and all of Science is Math as well. Let's agree to disagree on that.

The point I was making was about my misuse of APL (implying that FP can be misused as well), not the quality of a small piece of C code. I am sure that FP has a useful purpose, just as APL still does.


CS has a perfectly unambiguous syntax for describing algorithms, and it is called source code. We even have an independent third party that judges whether that code is correct or not: a compiler and our CPUs.

I am surprised to hear this, and I now realise in hindsight that I did not need to waste so much of my life playing with various LaTeX algorithms packages. This is good news. Which language can I write source code in to describe algorithms unambiguously?

Even better news than the existence of a language with well-defined meaning / behaviour on every computer platform is the news that my CPU can tell me if my program is correct or not. This will actually save me more time in the long run than typesetting algorithms for papers. What is the magic command / arguments to perform this task?