Binary Representation - Is it something to rise above?

There is a tension in programming language design. Are we describing operations, or are we describing semantics?

I have a strong opinion that these are separate spheres - that representation should be either fully above or fully below the language's native level of abstraction - and that it is therefore hard for a language to do both well.

At one extreme, C, Pascal, Fortran, etc -- all those old classic statically compiled languages -- are primarily for describing machine operations, and specifically machine operations on bits and bytes. To the extent that they have type systems at all, the types each refer to a specific range of memory - bits and bytes - mapped directly to a template that specifies the bit-level encoding of each value.

At the other end, R4RS scheme - I think that was the last version that this mostly applied to. R4RS scheme was, aside from a few warts, a language in which caring how something was represented in bits and bytes just wasn't a relevant question. The programming model was a simple one in which things were values, or containers for values, and if you cared how a character or a number was represented in bits, then it could only be because you were making a mistake.

Numbers were numeric values, not a specific number of bits with a specific mapping of bit patterns to numbers. Characters were character values, not a specific number of bits with a specific mapping; they had no numeric values. In principle you didn't know, or even care, whether you were working on a machine where everything was represented in nine-trit "trytes" at the bottom level; there was just no reference to representation anywhere in the language's semantics, because the abstraction barrier was drawn above the level of representation.

I recently observed that C was a language for describing machine operations, and scheme a language for describing semantics, and this is what I meant. The C programming model makes no sense unless types represent specific memory-aligned areas with specific bit-pattern-to-value mappings. R4RS scheme's programming model would have made no sense if they did. They operate at entirely different levels of abstraction.

Another topic recently asked what would be the characteristics of a good successor language to scheme, or of a future direction for scheme - and I think that representational abstraction barrier is an important property of any high-level language which ought to be preserved.

If you're not building a binary interface to something (like hardware or a communications channel that's specified in binary), and you're not working in a hardware-oriented language where those units and specific mappings of bit patterns to values ARE the sole underlying paradigm of your whole type system, and yet you still have to care how values are represented in bits, then I submit that there is a deficiency in the design of the language.

Conversely (and here R4RS scheme failed horribly), in such a language, when you *are* building an interface to something, the patterns of bits you read or write should get interpreted according to a very specific mapping. The language that abstracts above representation issues for other operations cannot be used to interface with anything unless the programmer specifies explicitly what mapping of bits to values is to be used for I/O. And this is because you're interfacing to things that *DON'T* have a common language for communication that abstracts above the level of representation, so you can't keep the abstraction barrier above that for the channel itself.

A particularly egregious failure of this principle happened in a graphics library written in Scheme. Scheme provided no 'blob' type, and there was no way to specify a bits-to-values mapping on the I/O channels, so the implementor used strings to hold bytes, assuming that the I/O would be done in ASCII + Code page 1386. And of course the code exploded when used on an interface where reading and writing characters was by default done with a different code page or in UTF-16.
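The point about explicit encodings can be made concrete. In this Rust sketch (Rust rather than Scheme, purely for illustration), the same two-character text maps to different byte sequences under UTF-8 and UTF-16LE, so an I/O channel must be told which mapping to use:

```rust
// The same character sequence, under two different bits-to-values mappings.
fn main() {
    let text = "Aé";

    // UTF-8 encoding: 'A' becomes one byte, 'é' becomes two bytes.
    let utf8: Vec<u8> = text.bytes().collect();
    assert_eq!(utf8, vec![0x41, 0xC3, 0xA9]);

    // UTF-16LE encoding: each of these BMP characters becomes two bytes.
    let utf16le: Vec<u8> = text
        .encode_utf16()
        .flat_map(|u| u.to_le_bytes())
        .collect();
    assert_eq!(utf16le, vec![0x41, 0x00, 0xE9, 0x00]);

    // Code that assumed "one character = one byte" breaks on either channel.
    assert_ne!(utf8.len(), text.chars().count());
}
```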

So... in designing a programming language to operate above the representation abstraction barrier, I would absolutely ensure that bit and byte width of representations entered into the language semantics only when invoked specifically for I/O purposes. If the word "bit" came up in any chapter of the documentation other than the one about how to do I/O, I would be certain that I had made a mistake. If it were used referring to something the programmer actually needs to know to get useful work done, I would be certain that the mistake was egregious.

And this is why I am so completely certain that "uniform vectors" are a mistake in Scheme. They mean the programmer has to care about binary representation for some reason other than I/O.


Your contrast between C and Scheme is simply false to the facts

Characters have had a 1-1 mapping to a subset of exact integers since R2RS, just as they have in C. This total order must obey certain constraints (the upper-case characters must be in order, the lower-case characters must be in order, the digits must be in order, the digits may not be interleaved with either the upper- or the lower-case characters) that are exactly the same in R[2-5]RS and in C, and are still true in C today.
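That digit-ordering constraint is easy to check mechanically. A Rust sketch, using Unicode scalar values as the character-to-integer mapping:

```rust
// The constraint: the digits must map to consecutive integers, in order.
// Here the mapping is Unicode scalar values (char as u32).
fn main() {
    for (i, c) in ('0'..='9').enumerate() {
        assert_eq!(c as u32, '0' as u32 + i as u32);
    }
    // The letters must likewise be ordered within each case.
    assert!(('a' as u32) < ('b' as u32) && ('A' as u32) < ('B' as u32));
}
```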

It is not and never has been the case in C that particular bit patterns are required. Until the creation of ISO/ANSI C, there were not even specific limits on the meanings of the types char, short, int, long, float, and double. Now the first four have lower bounds: char must be able to represent at least the integers between -127 and 127, and so on, but there are no upper bounds provided. What is more, there are no guarantees about the endianness of these types.

Yup. I know some folks

Yup. I know some folks found it useful to map letters to numbers and vice versa. And I know that for convenience people insisted on there being a default mapping which happened to be the same number-to-letter mapping as ASCII. But that didn't happen until R5, when the process had largely been taken over by people who didn't see an abstraction barrier so much as they saw an inconvenience. They fixed the inconvenience (which mostly affected I/O) without preserving the abstraction.

Up to and including R4, there were only a very few constraints on the mapping: Digits had to be sequential from zero to nine, and letters had to be sequential from a to z and from A to Z. You might care about that for programming purposes if you wanted to map letters to a contiguous array range or something. Aside from that if you actually cared which number you got for which letter, as opposed to there being a number to map to each letter, then you weren't using them for any purpose related to their values as letters.

But, unlike some languages (like, for example, the one I contrasted it against) you can't add a number to a character and get a sensible answer. Which, you know, should be true in programming languages because it's true in the world.

Off-by-one errors

What you say is true of R4RS is also true of R5RS.

What you say happened in R5RS didn't happen until R6RS, reading "Unicode" for "ASCII".

R6RS insists that all implementations support the full Unicode character repertoire and that the mapping to integers uses Unicode codepoints. R7RS compromises: the mapping is Unicode codepoints, but only the ASCII repertoire needs to be supported. In R7RS, there may be characters excluded from strings in the manner of Common Lisp, but ASCII must not be excluded (but NUL may be excluded even so). Currently, all R7RS implementations support and allow all Unicode characters everywhere.

There are links to all the standards at the R7RS wiki's home page. Use the Text, Luke.

Algorithm efficiency and mathematical models.

There are some algorithms that can only be written on a random access machine model. Abstract machines come in all shapes and sizes, and to suppose the scheme abstract machine is better than the 'C' one because it cannot be physically realised seems odd to me. One abstraction is not 'above' the other, they are just different, like a stack machine is different from a register machine, yet they are all mathematical models.

Excuse me, but that's low.

I do not value the fact that it cannot be manufactured in hardware, and you know it. A claim like that, aside from being a personal denigration, is a rhetorical strawman. Surely you have more worthy things to do with your time than stoop to such tactics.

I care that the programmer should not have to be mentally translating semantic entities that have nothing to do with bits, to and from bits.

If writing a program means you have to think about the semantics of numeric overflow and underflow - whether a given value will "fit" in a particular numeric type, for example, because of a bit-width restriction that has nothing to do with the concept of numbers - doesn't that mean your effort as a programmer is being wasted on crap that has nothing to do with the algorithm you're trying to implement?
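A small Rust illustration of the kind of bookkeeping being complained about: the mathematical sum plainly exists, but the question "does it fit?" is purely about representation:

```rust
fn main() {
    let a: i32 = 2_000_000_000;
    let b: i32 = 2_000_000_000;
    // Mathematically a + b = 4_000_000_000, but it does not fit in an i32,
    // so the programmer must reason about representation, not numbers:
    assert_eq!(a.checked_add(b), None);
    // Widening to i64 "fixes" it only until the next width boundary.
    assert_eq!(a as i64 + b as i64, 4_000_000_000i64);
}
```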

If a language lets the programmer, to the extent possible, avoid fussing with allocation and freeing by providing automatic memory management, then there is no reason it should not also let him, to the extent possible, avoid fussing with stupid semantic constraints that arise solely from representation issues.

Number Theory and Encryption

Everything is a representation. Number theory does clever things with integers, and recent discoveries about primes talk about things that hold true in all bases. Binary is not just a representation but a useful mathematical tool with some very nice properties, like the fact that in binary all even numbers end in '0'. Even without computers, binary is necessary for some things.

It is hard to be precise when specifying an abstract model in human language; one advantage of a hardware representation is that it functions as an accurate, reproducible formal specification, a bit like the master 'kilogram' weight.

Many things like encryption require modulo arithmetic, and binary representations of both letters and numbers. Many algorithms take advantage of the specific properties of binary shift left and right like Egyptian multiplication.
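Egyptian (peasant) multiplication, mentioned above, is exactly a shift-and-add loop over the binary digits of one factor. A minimal Rust sketch (the function name `egyptian_mul` is mine):

```rust
// Egyptian multiplication: halve one operand, double the other,
// and accumulate the doubled value whenever the halved one is odd.
fn egyptian_mul(mut a: u64, mut b: u64) -> u64 {
    let mut acc = 0;
    while b > 0 {
        if b & 1 == 1 {
            acc += a; // low bit set: add the current doubling
        }
        a <<= 1; // double
        b >>= 1; // halve
    }
    acc
}

fn main() {
    assert_eq!(egyptian_mul(13, 12), 156);
    assert_eq!(egyptian_mul(0, 99), 0);
    assert_eq!(egyptian_mul(1234, 5678), 1234 * 5678);
}
```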

However, I would agree that modern languages should make available an infinite-precision integer type (does scheme do that?) for when you don't care about performance and don't want to worry about overflow. If you provide binary words as a base type, you can implement this as a library.

To the point....

I actually write cryptographic code. So I can say this with absolute certainty. To do encryption on anything other than a binary blob is a type error.

During encryption I am not using that blob for any purpose relating to its semantics as characters, strings, numbers, or anything else. I am using it for its semantics as a blob.

Now if you have non-blob data, and you want to encrypt, you first have to convert it into a blob by writing it using specific encodings, just like doing any I/O. And when you have a decrypted blob and you want to convert it into values of other types, you have to convert it to other data by reading the blob as specific encodings, again just like doing any I/O.
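A toy Rust sketch of that pipeline, using a trivial XOR "cipher" purely as a stand-in for real encryption (`xor_cipher` is an invented illustration, not a real cipher): text is explicitly encoded to bytes before encryption, and explicitly decoded after decryption, just like any other I/O:

```rust
// Encryption operates on bytes (a blob), never on strings directly.
fn xor_cipher(data: &[u8], key: u8) -> Vec<u8> {
    data.iter().map(|b| b ^ key).collect()
}

fn main() {
    let plaintext = "héllo";
    let blob = plaintext.as_bytes().to_vec();      // explicit encoding step (UTF-8)
    let ciphertext = xor_cipher(&blob, 0x5A);
    let decrypted = xor_cipher(&ciphertext, 0x5A); // XOR is its own inverse
    let recovered = String::from_utf8(decrypted).unwrap(); // explicit decoding step
    assert_eq!(recovered, plaintext);
}
```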

And modular arithmetic, I'll agree with you, is darned useful and there's no reason not to have modular arithmetic in a language where you're using numbers as numbers. There's no need for special syntax or semantics for the special case when the modulus happens to be two or an even power of two; that's a nice special case for an optimizer to pick out, but it's exactly the same semantics as any modulus operation and should be exactly the same syntax.
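The power-of-two special case an optimizer can pick out, sketched in Rust: for unsigned x and a power-of-two modulus m, x % m equals x & (m - 1), with no special syntax at the source level:

```rust
fn main() {
    let m: u64 = 1 << 16; // a power of two
    for x in [0u64, 1, 65_535, 65_536, 123_456_789] {
        // same operator, same semantics; the bitmask is an optimizer detail
        assert_eq!(x % m, x & (m - 1));
    }
    // a non-power-of-two modulus uses the very same operator
    assert_eq!(123_456_789u64 % 1000, 789);
}
```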

The Blob

I have no argument with the blob, although I would rather call it a byte array, as blob implies something opaque to me (maybe too much SQL on my part). For encryption a byte array with binary operations (and, or, xor, shift-left, shift-right, modulo addition, subtraction, multiplication etc...) is what you want.

I don't think mapping characters to the blob is I/O though, and it seems a bit strange you see it that way. In any case there needs to be a standard mapping between strings and byte-arrays, and I don't think I disagree with you there either. However a giant switch statement with every possible character and a numeric encoding does not seem at all maintainable as a way of doing this. In the end there are only byte-arrays, and what gives meaning to it is the graphics renderer that puts characters on the screen.

Where I start to differ is when it comes to binary operators. Boolean algebra is its own type, and should have its own operators. I think you should provide binary codings of various lengths (8, 16, 32, 64, 128, 256+ bits) and infinite-precision integers that are opaque and have no accessible representation. I don't see any need to provide other bases.

Take for example this nice PRNG I wrote in Ada:

function Next(
    Random : in out Random_Type;
    Size   : in Unsigned_32
) return Unsigned_32 is
    S1 : Unsigned_64 := Random.S0;
    S0 : constant Unsigned_64 := Random.S1;
begin
    S1 := S1 xor Shift_Left(S1, 23);
    Random.S0 := S0;
    S1 := S1 xor S0 xor Shift_Right(S1, 17) xor Shift_Right(S0, 26);
    Random.S1 := S1;
    return Unsigned_32(Shift_Right(((S1 + S0) and 16#ffffffff#) * Unsigned_64(Size), 32));
end Next;

How would you like to see this coded?

Edit: A further thought: everything is numbers to a computer; the important things are the keyboard mapping and the display mapping. Whichever way I think of it, a string is an array of symbols (be they Chinese or English). The Chinese used to try to develop keyboards for Chinese characters, but my understanding is they now type an English transliteration and then choose between characters, with something like predictive text making this easier than a keyboard with 3000 commonly used characters. The important thing would be the indivisible character. At some point these need to map to numbers, be they keycodes or index lookups for the display vectors. I am not sure the programming language itself needs to define all the possible characters the system it is being used on can input or display. To me, UTF-8 or UTF-16 seem the best solutions to these problems.

Blob = binary object. Opacity not implied.

I use "blob" to mean the data type that binary operations are valid on. Any time you're saying things like XOR and SHIFT and so on, you're using the semantics of a blob. Blobs have, for example, a specific width in bits; that's a concept which isn't relevant to the semantics of numbers.

Anyway, it looks like the only time you're using numeric semantics on the state variables at all is for a single addition and a single multiplication. Conversely, you're using at least half a dozen blob operations. Also, the bit width is important to your algorithm, and numeric values don't have such a concept - is '2' 8 bits wide, or 64 bits wide? So the state variables are definitely 'blob64' rather than any kind of integer.

Given that, I'd be using value casts to access addition and multiplication, then casting the results of those operations back to blob64 so as to avoid a type error on storing them in the state variables, and returning the (numeric) result via another cast.

The cast operations, by my own preference, would have to specifically say 'IntUnsigned' to specify which conversion to use, rather than saying 'Int2sComplement' or 'Int1sComplement.' Otherwise you'd sometimes return (or do math with) integer values that aren't the ones you intended.

But that's my impulse to make assumptions visible; other programmers would prefer to make one of these conversions (Usually Int2sComplement) the default so they don't have to specify the one they most often want.
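A hypothetical Rust sketch of the style described above: a `Blob64` newtype (all names here are invented for illustration) carries only bitwise operations, and numeric semantics are reachable only through explicitly named conversions, so every crossing of the blob/number boundary is visible in the source:

```rust
// Blob64 carries only blob semantics: XOR, shifts, etc.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Blob64(u64);

impl Blob64 {
    fn xor(self, other: Blob64) -> Blob64 { Blob64(self.0 ^ other.0) }
    fn shl(self, n: u32) -> Blob64 { Blob64(self.0 << n) }
    fn shr(self, n: u32) -> Blob64 { Blob64(self.0 >> n) }
    // The only bridges to numeric semantics, named for the mapping used
    // (the analogue of the 'IntUnsigned' cast in the text):
    fn to_unsigned(self) -> u64 { self.0 }
    fn from_unsigned(v: u64) -> Blob64 { Blob64(v) }
}

fn main() {
    let a = Blob64(0b1010);
    let b = Blob64(0b0110);
    assert_eq!(a.xor(b), Blob64(0b1100));
    assert_eq!(a.shl(1).shr(1), a);
    // Arithmetic only through an explicit, visible conversion:
    let sum = Blob64::from_unsigned(a.to_unsigned().wrapping_add(b.to_unsigned()));
    assert_eq!(sum, Blob64(16));
}
```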

Wrapping Addition on Byte Arrays.

I still prefer byte-array to blob, but that's just a name. Maybe it's because I'm an electrical engineer by training, but I see no problem defining modulo add and multiply operators on binary numbers; after all, the add-gate is a three-input, two-output logic gate. The PRNG relies on the overflow behaviour for correct operation, so the add must be a 'wrapping' add, and casting to infinite-precision (unbounded) integers would not work. It also relies on all the operations being defined over particular word sizes, and the casting between different word sizes is necessary.

Rust has a very interesting approach and guards against overflows, so you have to specifically request wrapping behaviour. There are two ways to achieve this in Rust: declaring a 'wrapping' type, or using special 'wrapping' operators on non-wrapping types. The experience appears to be that using the 'wrapping' types results in wrapping propagation, with the program code getting littered with conversions back and forth and an overabundance of 'wrapping'. As such, the preferred approach would appear to be to use a binary 'blob' type as you say, and a special 'wrapping_add' or 'wrapping_mul' operator, resulting in code like this:

trait Random {
    fn next(&mut self, n: u32) -> u32;
}

struct XorShiftPlus {
    s0: u64,
    s1: u64,
}

impl Default for XorShiftPlus {
    fn default() -> XorShiftPlus {
        XorShiftPlus {
            s0: 123456789362436069,
            s1: 52128862988675123,
        }
    }
}

impl Random for XorShiftPlus {
    fn next(&mut self, n: u32) -> u32 {
        let mut s1 = self.s0;
        let s0 = self.s1;
        s1 ^= s1 << 23;
        self.s0 = s0;
        s1 ^= s0 ^ (s1 >> 18) ^ (s0 >> 5);
        self.s1 = s1;
        (((s0.wrapping_add(s1) & 0xffffffff).wrapping_mul(n as u64)) >> 32) as u32
    }
}
Now I quite like this, and have no objection to having wrapping_add as a different operator, so the semantics is clear. As Rust guards against overflow, there is no problem overloading the normal +, -, *, / for blob types either.

Overflow semantics behaviour

Overflow semantics behaviour should be distinguished by type IMO. Anything called "int" should be infinite precision or raise an error on overflow. Wrapping semantics is part of a "word" type, not an integer type.

Rust does overflow checks.

Rust checks arithmetic operators for overflow in debug builds, and the compiler will reject constant expressions it can prove overflow. Actual overflow in a debug build is a runtime panic; in release builds, the checks are off by default and arithmetic wraps.

You can have a 'wrapping' type, but the casts get in the way of readability, and function arguments start asking for wrapping 64-bit ints, which may not make sense from the user's perspective (the person using the PRNG may not be using wrapping arithmetic). As such, the wrapping operators seem to make more sense; it is what Swift has chosen to do as well. I tried writing this using Rust's wrapping types, Wrapping<u32> and Wrapping<u64>, but it made the code harder to read than using the specific wrapping_ operators.
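A minimal Rust comparison of the two approaches: the `wrapping_add` operator on a plain `u64`, versus the `Wrapping` newtype, whose arithmetic reads normally inside the expression but must be wrapped and unwrapped at the boundaries:

```rust
use std::num::Wrapping;

fn main() {
    // Explicit wrapping operator on a plain type:
    let a: u64 = u64::MAX;
    assert_eq!(a.wrapping_add(1), 0);

    // The same computation with the Wrapping newtype: '+' reads normally,
    // but values are wrapped on the way in and unwrapped (.0) on the way out.
    let w = Wrapping(u64::MAX) + Wrapping(1u64);
    assert_eq!(w.0, 0);
}
```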

R6RS ftw, then

If you don't want to worry about "whether a given value will 'fit' in a particular numeric type", then you need to be writing to R6RS, which is the first and (so far) the only Scheme standard to mandate that implementations provide unlimited (except by storage) exact integers. And it also means that you can afford to ignore performance, because bignums and especially ratnums get inefficient fast.

Of course, you can use inexact numbers (just inserting a single dot into a Scheme program can sometimes improve its performance immensely), but whenever you are using floats, you are worrying about underflow, overflow, range, and precision. Even R6RS won't help you much there — it is technically sufficient if an implementation provides only 0.0, according to my reading of the spec.

(So far the R7RS-large committee has voted to require bignums, ratnums, and real and complex inexact numbers in conforming R7RS-large implementations, but not exact complex numbers. Still open is whether or not to require IEEE semantics.)


Unicode is complex and inconsistent because it encodes all of human languages and human languages are various and not consistent with each other.

Look, Unicode is NOT THAT BAD. It's not that scary. And getting and displaying input in everyone's language is wonderfully empowering ...

Yeah it has a couple megabytes of data and a few algorithms you have to include.

You can make an embedded version of your language without that.
But the version with the table will be the most popular version, because not everyone in the world speaks English.

As a totally unfair joke I'll remind you that CAR means "Contents of the Address part of Register" and CDR means "Contents of the Decrement part of Register".

Sigh. It's more than unicode, okay?

Actually I made this observation long ago, even before there was a Unicode standard, although Unicode is an excellent case in point. I thought about this pretty hard when I finally got that numbers were just numbers in the type system; not particular numbers of bytes and particular ranges and weird overflow semantics and so on. I thought about it pretty hard when I realized how different programming with automatic memory management was, and how code that dealt with values rather than representations hardly ever benefited from mutating anything.

Although I think Unicode is badly designed, there is great value in dealing with the issues it brings up. Unicode's poor design revealed exactly how and why we were wrong in our entire paradigm of handling characters and strings. We had been doing operations on characters and strings by operations that manipulated binary representations instead of by operations that worked on the semantics of characters and strings.

By failing to provide every last one of the representation crutches we had been using, and invoking every last thing that can possibly go wrong with conflating representation with string/character identity, Unicode requires us to think about what the hell we actually intend to do with values that are characters and strings in terms of their values as characters and strings. That is different from thinking about how we can achieve character and string results by manipulating their binary representation.

Unicode meant that doesn't work any more, and as a wake-up call it was long overdue. But it wasn't the first such wake-up call. And it won't be the last. It was just the one that applied the problems we already had in other types, to strings and characters.

Abstraction overflow, wraps back to representation

One idea I've found interesting in ultra high-level languages like Coq is (by example) this:

IEEE floating point number representations can be formally specified in Coq, along with their basic hardware-level arithmetic behaviors. We can then both reason about the specification and test the specification against the hardware behavior. Assuming both operate equivalently (to a high level of confidence that there are no hardware bugs) we are free to then leverage the floating point hardware unit for high-performance manipulations of a high level abstraction of a low-level representation.

This approach can feasibly be applied for high performance vector and matrix manipulations, GPGPU computing, FPGAs, future development of neural network chips, etc..

This is a direction I'd like to see us pursue in PL designs. Instead of having unspecified representations and manipulations thereof, we abstract known useful, hardware-friendly representations and functions then have a variety of hardware-specialized implementations for them. Conversely, hardware can be developed to optimize specific abstractions that appear promising based on a large body of available software.

On the language side, we don't necessarily need Coq, but we would need a way to reliably compile and specialize code to use the hardware-optimized implementations, and to get warnings or errors back if an expected specialization cannot be performed.


Are you suggesting that programmers would explicitly bind to IEEE floating point semantics? Or perhaps they bind directly to a Float abstraction and then later substitute an IEEEFloat?

The alternative I prefer is along the lines that you sketch, but with a representation selection step. Most programmers will write code that binds to Real -- real real numbers. All the usual algebraic laws apply to this semantics. Subsequently, guided by the programmer, a representation will be extracted that chooses a floating point implementation to approximate the semantics. All of these steps will be semantically defined in the context of a powerful logic for which properties can be proven, though I expect formal numerical analysis (with the IEEE standard or otherwise) will be a time-consuming and difficult undertaking without significant advances in theorem proving tech.

Real Reals

I am not sure: what are the semantics of 'real' Real numbers? If I divide 0/0, what do I get? IEEE FP is actually better specified, with explicit INF and NAN values. Real numbers are just not well enough defined to be used in computation.

Usual Semantics

Semantics of reals are the usual ones. You can define them as a complete totally ordered field or construct them (e.g. Dedekind cuts). Mathematicians leave 0/0 undefined on purpose. For certain algorithms you can work with completed reals (e.g. Reals + infinity) and then extract a floating point approximation of that.
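For reference, a sketch of the completeness axiom that, added to the ordered-field axioms, characterizes the reals uniquely; it also shows why 0/0 stays undefined:

```latex
% Completeness (least-upper-bound property):
\forall S \subseteq \mathbb{R}:\;
  \bigl(S \neq \emptyset \;\wedge\; \exists b \in \mathbb{R}\;
    \forall s \in S:\, s \le b\bigr)
  \;\Longrightarrow\; \sup S \text{ exists in } \mathbb{R}
% 0/0 is left undefined on purpose: in any field, 0 has no
% multiplicative inverse, since 0 \cdot x = 0 \neq 1 for every x.
```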

Real number semantics have an unfortunate feature....

Real-number semantics as represented on machines have the unfortunate feature of mostly being inexact.

Searching for a representation that is finite, yet gives exact results on the operations people actually do, as often as possible, leads to something like "numerator, times a power of ten, times a power of two, divided by denominator, plus a word of flag bits." Three bits to track powers of e, three to track powers of pi, one to set if it's inexact, one to set if it's marked "bignum" (and therefore computations are constrained to produce exact results or errors, and exact results may require arbitrary-sized representations), one each for it being the result of trig functions applied to the represented value, etc. And then you double the whole thing when representing complex numbers.
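A hedged sketch of that representation as a record (all field names are invented; the e/pi factors and the trig flags are ignored in the `approx` helper for brevity):

```rust
// numerator * 10^pow10 * 2^pow2 / denominator, plus flag bits.
struct ExtendedReal {
    numerator: i64,
    denominator: u64,
    pow10: i32,
    pow2: i32,
    e_power: i8,  // "three bits to track powers of e" in the text
    pi_power: i8, // "three to track powers of pi" in the text
    inexact: bool,
    bignum: bool,
}

impl ExtendedReal {
    // Approximate numeric value, ignoring the e/pi factors for simplicity.
    fn approx(&self) -> f64 {
        self.numerator as f64 * 10f64.powi(self.pow10) * 2f64.powi(self.pow2)
            / self.denominator as f64
    }
}

fn main() {
    // 0.75 represented exactly as 3 * 10^0 * 2^-2 / 1
    let x = ExtendedReal {
        numerator: 3, denominator: 1, pow10: 0, pow2: -2,
        e_power: 0, pi_power: 0, inexact: false, bignum: false,
    };
    assert_eq!(x.approx(), 0.75);
}
```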

Then you do as much symbolic algebra during bytecode compilation as you can, arranging operations to keep errors and approximations to a minimum; then you add a bunch of special cases, like trigonometric identities that yield exact results, watching out for e raised to the product of pi and i and converting it to -1, etc.

Then you add code to silently coerce results to inexact whenever you make an approximation (unless all the operands are actually marked bignum, just because I prefer finite representations to halting programs), emit non-bignum (exact) results when operands are exact but at least one of them isn't a bignum, and emit bignum (exact) results when the operands actually are marked bignum.

This provides exact answers, even with non-bignum arguments, in many more cases than bignums/ratnums alone; makes even most exact-number representations finite enough not to explode in memory and halt the program; and fixes it so inexact computations don't lose accuracy when that can be avoided. It also has the property that every inexact or non-bignum numeric value can also be represented as an exact or bignum numeric value, so you don't lose accuracy when you convert, and exact->inexact and inexact->exact, at least in the case of non-bignums, are truly dual to each other. So generally, it tracks the semantics of real numbers a heck of a lot better than IEEE 754.

I actually did this. I considered it to be far more in the spirit of treating numbers as numeric values, doing calculations as accurately as possible, and returning exact results whenever possible, as recommended by R4 scheme. I understand where most implementors would prefer not to do it, because it's difficult and lots of people are in a great hurry to get wrong answers and wouldn't like it.

It was, IMO, a "Right Thing" and an exercise in excellence of realizing the GOOD properties of the "a number is a VALUE damnit not a representation" type system.

Which the scheme standardization committee proceeded to forbid. Because they wanted numbers to be representations, not values. They actually required WRONG results in some cases by demanding IEEE 754 conformance and disallowing more than that, and required the results of a lot of operations I was actually returning exact results for, to be inexact.

In the same spirit, I implemented *REAL* Unicode handling, with character = grapheme-cluster semantics, eager normalization, etc., because damnit, I wanted to do Unicode right. It required implementing strings as trees with immutable leaves, which got me constant-time copy and log-N time for most mutations.

And then again, the committee proceeded to forbid the Right Thing, because they wanted strings to be representations not values. They sat down and made character=codepoint semantics, which are wrong. Then they allowed invalid strings, non-normalized strings, and non-characters to be string and character values. In short they transferred all that bookkeeping, normalization, and effort to the language users when it could have been folded into the string functions and libraries to the users' benefit.

I implemented library procedures that took error continuations as arguments with return-to-this-if-invalid semantics, combined with no-such-thing values for all the types. Not-a-number, Not-a-boolean, Not-a-string, etc. Specifically to return in case of invalid operations, with conversion to "stale" no-such-thing values on storage in a variable, because you've got to be able to tell the difference between an operation that yields a "no such value", and a variable reference that had a no-such-value thing stored in it. I did this because scheme has continuations, and that is the correct way to handle errors with continuation-based programming. Instead the committee standardized throw/catch, which will generate ENDLESS amounts of hair while fighting with continuations.

And that was more or less when I gave up on the Scheme standardization process in disgust, and decided that what I was working on was no longer any kind of scheme and I would not be bound by their stupid standard.

Because their stupid standard was forbidding excellence by clinging to representations rather than values.

Kernel numbers?

Hm. How does the Kernel treatment of numbers compare/contrast with your criticism of Scheme numbers? (Chapter 12 in the R-1RK.)

Similar in spirit.

I didn't get around to implementing upper and lower bounds, but it's in the spirit of what I was doing.

It looks like a superior standards doc, in that it absolutely NEVER forbids a result more accurate than expected and absolutely NEVER forbids returning exact results when results can be known exactly.

Requiring arbitrary-length integers and rationals in all cases of computation on exact values gives scheme and lisp users a familiar numeric semantics. However, it also mandates crashes on attempting to run a lot of otherwise-valid programs. And users hit those program crashes with semantically sensible programs, accidentally and often.

It's avoidable by altering the program to explicitly make at least one of the numbers involved inexact, so it's not a huge issue and I wouldn't fight over it. Typically it crashes once or twice in development and the programmer goes 'doh!' and fixes it.

But any behavior that risks program crashes for semantically sensible programs is something I think users should have to invoke on purpose.

Users should be able to write programs that crash for that reason, of course, when they actively require either a crash or that kind of arbitrary exactness. That's what I was providing with the 'bignum' attribute on numbers being available regardless of magnitude or numeric value - 'bignum' is an attribute requiring either an exact bignum result (on ordinary algebraic operations anyway) or a crash. I figure that's sort of an unusual case, so I didn't make it the default.

You're entitled to your own opinions, but not your own facts

Which the scheme standardization committee proceeded to forbid. Because they wanted numbers to be representations, not values. They actually required WRONG results in some cases by demanding IEEE 754 conformance and disallowing more than that.

No Scheme standard has ever required IEEE 754 semantics. There are a handful of R6RS procedures that convert between IEEE 754 representations (as byte vectors) and inexact numbers. Otherwise, you get some helpful hints about what to do if you do use IEEE 754 representations, and that's all.


I'm not sure I understand your approach. As far as I can see ALL symbol manipulation, including mathematics and computing, is about representations and approximations. For example, if you get halfway through high school you run into irrational numbers, and exact representations of very specific kinds of subsets have to wait for Galois theory. By which time one really *knows* one is dealing with representations chosen to suit an application. There's no escape from this, I fear.

If that's what they're counting on, yes.

Programmers whose code relies for its correctness on the specific ways in which IEEE floating point semantics differ from mathematical real-number semantics, yes, should specifically bind IEEE floating point semantics.

If your needs are not quite so specific, you should just be telling the system it's allowed to approximate and what quality of approximation you need. Let the compiler determine what hardware representation meets your program's requirements.


How would the definitions of the common operators be written so that programmers without a maths degree can understand them? What would the definition of addition on real numbers be, for example? I think as soon as you start trying to define the semantics axiomatically or in terms of Dedekind cuts, you are going to lose 90% of programmers' interest right away.

I think that representations are actually easier to understand, and you will have a hard time replacing a definition most programmers can reason about (the concrete semantics of floating point) with one that most people cannot reason about (the abstract semantics of real numbers).


(I assume you meant to reply to me, not Ray.)

Most programmers don't understand the semantics of Float, so why should they need to understand the semantics of Real? Mostly programmers won't be able to tell a difference since without intervention you'll get pretty much the same code that you would have had with Float.

I should mention that this is still a ways out for me. I'm not going to implement this idea until the rest of the language is fairly polished.

I will have a hard time...

I did, sorry about getting it in the wrong place. Let's leave out most programmers; I will have a hard time understanding the semantics of Reals. I think you can only define them as an axiomatic system in second-order logic, so that already means you cannot easily think about them.

Looking at it another way, I find it easy to reason about finite natural numbers defined in terms of Peano numbers, and how operations like addition work on them. I find the definition of infinite naturals to already be problematic, as there is no 'representation' to think about and no operations with concrete definitions that manipulate those representations. Effectively it is no longer a Turing machine but an axiomatic system.

For example, how would you define addition on infinite naturals?
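The finite Peano naturals mentioned above do admit exactly the kind of concrete representation and concrete operations being asked for. As an illustration (in Python, with names of my own choosing; this says nothing about the infinite case, which is the open question here), addition can be defined by structural recursion on the first argument:

```python
# Peano naturals as an algebraic datatype: Zero, or Succ(predecessor).
from dataclasses import dataclass

class Nat:
    pass

@dataclass
class Zero(Nat):
    pass

@dataclass
class Succ(Nat):
    pred: Nat

def add(m: Nat, n: Nat) -> Nat:
    # add(Zero, n) = n ; add(Succ(m), n) = Succ(add(m, n))
    if isinstance(m, Zero):
        return n
    return Succ(add(m.pred, n))

def to_int(n: Nat) -> int:
    # Count the Succ constructors, for inspection.
    k = 0
    while isinstance(n, Succ):
        k += 1
        n = n.pred
    return k

two = Succ(Succ(Zero()))
three = Succ(two)
```

Each step of `add` is a concrete, mechanical manipulation of the representation, which is precisely what has no obvious analogue for infinite naturals.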

Think of them as dots on a line

I suspect you're not as uncomfortable with Real numbers as you're implying. Sure, you can think about finitary objects in certain ways that don't work for Reals (you can ask the computer to compute things exactly). I personally am most comfortable with numbers that I can count in unary on my fingers, but I can get by thinking about reals. Equational reasoning works well. I'm guessing you may have taken classes on the subject back in school. :) Real numbers have much nicer equational properties than IEEE floats.

I'm not sure what you mean by infinite naturals. Naturals aren't infinite, by definition. I'm guessing you mean: how would I construct a non-standard model of arithmetic? I think that can actually be done in an ordinary way with an algebraic datatype.

Naturals not infinite

Well, that's a relief, as I can cope with finite but unbounded naturals. What I am more concerned about is that I cannot write down any irrational numbers; they only occur as the output of generator functions.

I guess I don't understand how to write software to manipulate numbers that I don't have a representation for.

explicit bindings to representations

Yes, programmers would explicitly bind to IEEE floating point semantics. Or, at least, they'd have that option. Taking this idea together with generic programming, they could presumably bind code to a more generic 'Float' or 'Real' concept as you suggest and separate a representation from the algorithm that uses it.

I believe generic programming can be understood and discussed more or less independently from the approach I'm describing above, of providing high-level abstractions for hardware-optimized low level representations.

Agree and disagree

I see what you're saying, and I sort of agree with you. But I don't think the decision to use a representation system is completely orthogonal to an axiomatization of floats.

A nice property of a representation system is that it's completely general. All of the operational semantics of your program arise from the selection of representations that will invoke the desired response from the hardware. All hardware behavior can be axiomatized to the extent desired and properties can be established in a strong logic.

Why should IEEE floating point semantics be treated differently than CPU instructions? I see it as the same problem. In either case you can target an intermediate abstraction (Float or Bytecode).

modeling CPU instructions

I do not intend to suggest that IEEE floats are a special case. I just used them as an example of a more general idea for which I lack a name. I don't see any problem with modeling CPU instructions in a similar manner. Doing so - together with a function to simulate execution - could provide a nice basis for explicit compilation (staging or JIT), and for simulations across machines.

I'm uncertain which two concepts you believe to be non-orthogonal in context. But generic programming (e.g. the ability to program against a generic 'float' or 'real' concept or typeclass, instead of being forced to rewrite the algorithm for each different FP model) seems a separate issue from what I was describing. Not saying it's an unimportant issue, either, just separate.

Which two concepts

I understood you to be suggesting that we expose IEEE floats to the language via a datatype with certain properties. I don't think that's a general enough concept to handle CPU instructions, which require choosing representations for functions and other underlying "fabric" of the language. But maybe it fits in just fine with what you had in mind.

not underlying

underlying "fabric" of the language

I'm not suggesting that floating points would be "underlying" in any sense. Nor would be the CPU instructions. If you ran the same algorithm using the IEEE FP model on a system without an FPU, it'd just run very slowly because it's simulating algorithms that programmers generally expect to operate on an FPU. That, or the compiler would deny your request, perhaps depending on annotations or static assertions.

For CPU instructions, the same approach effectively gives you a virtual machine - i.e. you set up some description of machine state and develop an 'execute' function (machine state → machine state). If our model of the virtual machine and its execution function aligns closely enough to our actual physical machine (perhaps more constrained, e.g. because pure functions are confined and deterministic) then we can potentially implement 'execute' in terms of trivially moving our program counter, perhaps with a lightweight wrapper. Thus, it would qualify as a 'hardware-friendly' abstraction of a virtual machine. A compiler could be extended to recognize this opportunity and specialize the implementation to leverage hardware instead of simulating/interpreting execution.

This doesn't necessarily give us any ability to specify how arbitrary functions are implemented. Modulo powerful reflection capabilities, we can only fully specify functions that are expressed in terms of our hardware-friendly CPU instructions and the simulation function. Fortunately, this is sufficient for a lot of interesting use-cases (JIT DSLs, for example).

The optimization is more or less happening at a higher level of abstraction than language primitive types and functions, not a lower level of abstraction. Overlaying, layering, not underlying, but nonetheless leveraging hardware acceleration.


I'm with you. But is this orthogonal to a system of representation extraction? Perhaps. I still think that if you have what you're proposing, you'll want representations. Otherwise, how are you going to generate that block of CPU instructions?


A language will have at least one primitive type, above which we 'represent' things. E.g. with just lambdas, we can represent lists and numbers and booleans, etc..

Assume for a hypothetical language that we have arbitrary-precision integers, pairs, unit, a simplistic sum type (e.g. `(a + b) = Left a | Right b`), and opaque functions (a → b). In this case, we'll still have "representations" for everything we do. E.g. a useful accumulation of statistics might be represented as `(count, (sum, sum-of-squares))`. An arbitrary-precision floating-point number might be represented as (mantissa, exponent).
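As a toy illustration of the statistics representation just mentioned (function names are mine, not from any real library), the nested pair `(count, (sum, sum-of-squares))` supports the usual derived quantities:

```python
# A running-statistics value represented as (count, (sum, sum_of_squares)).
def empty():
    return (0, (0.0, 0.0))

def insert(stats, x):
    # Fold one observation into the representation.
    count, (s, ss) = stats
    return (count + 1, (s + x, ss + x * x))

def mean(stats):
    count, (s, _) = stats
    return s / count

def variance(stats):
    # Population variance: E[x^2] - (E[x])^2.
    count, (s, ss) = stats
    m = s / count
    return ss / count - m * m

acc = empty()
for x in [1.0, 2.0, 3.0]:
    acc = insert(acc, x)
```

The point is that nothing here is a primitive "statistics type"; it's pairs and numbers, with the meaning carried by the functions that respect the representation.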

Rather than CPU instructions, consider a simpler problem: how might you represent a byte? a machine word? How about an array of them?

Well, a machine word could be represented using an integer with range 0..2^K-1 for some K (usually 16, 32, or 64). We could combine this with functions that use modulo arithmetic, such that we know the output is another machine word. Then, when our compiler is applying and composing these functions on machine words, to arguments known to be machine words, we could optimize to use CPU-level machine word arithmetic.

An 'array', in the absence of array primitive types, could be modeled instead by a 'list'. Really, the primary difference between an array and a list is performance characteristics for indexed access. An array would give us better performance for functions like "copy the Kth element" or "update the Kth element" and perhaps "split at element K" - functions that work just fine on lists, but work faster on arrays. A compiler could be tuned to recognize functions on lists and to switch to an array representation heuristically. Or we could give the programmer control via annotations to insist or assert that a list be represented under the hood by an array.

A system for more formally specifying or parameterizing representations would likely improve our precision and robustness. But more ad-hoc, transparent, or dynamic approaches are also feasible.

Returning to your question: our CPU instructions might simply be a list (e.g. type `λa.μL.((a*L)+1)`) of small integers. It might transparently have a compact underlying representation. Or it might have a more formal, precise encoding in the presence of a representation system like you propose.

Representation selection

I think I'm using 'representation' a little differently than you, or at least have in mind a slightly different mechanism. My usage is generally that an implementation value represents a (often, but not necessarily, higher level) value in a given context. For example, an int32 might represent an integer in a context in which 32 bits is sufficient. Contexts can be complicated and require certain heap states. A function can be represented by a closure and a code pointer. etc.

My proposal for choosing representations is akin to a type system. We incrementally build up representation judgments until we have a suitable representation. As with fancy type systems, we can integrate an element of meta-programming to help drive this process and leave it under programmer control.

My point about building blocks of CPU instructions is that the mechanism by which you would normally want to do that is to extract a representation of a function using your existing code base. You don't want to have to re-implement a DSL usable version of every function you want to use. Scala has a DSL building pattern that kind of works the way you're describing but that also appears to me to have this downside.

I'm arguing that we ought to unify every implementation that our compiler produces into this framework. I think it's a fully general way of approaching compilation - all forms of optimization can be squeezed into this framework.

Good support for selection,

Good support for selection, specification, and refinement of representations is an interesting and involved subject. I think pursuing it, formalizing it, gaining precise and fine control of compiler output is a worthy idea. I also think it's unnecessary - in the overkill sense - for what I was describing earlier. But I'm not against overkill, and I'm not going to argue against your vision.

Aside: I tend to think "representation" vs. "type" is just a matter of perspective, i.e. whether we're looking up or down someone's ladder of abstraction. For example, the various Church encodings are representations of types like natural numbers, pairs, etc. - albeit, representations in a rather high level machine relative to modern computers. There are other representations we could have used (e.g. integers as a Church encoded list of Church encoded booleans). And lists of natural numbers can easily be used to represent other things, such as texts.

Representation vs. type

I originally wanted to use theories & types to model representational issues. We've discussed the idea elsewhere in this thread - you'd perhaps refine an abstract integer type with an int32. The problem I couldn't get past with that idea is that representation types don't faithfully refine the theory. A 32 bit integer type only encodes a little prefix of the integers. You can try to generalize to N-bit integers and then get back to the efficient 32 bit case as an optimization, but that's a mess and the general problem crops up all over the place.

Thus instead of declaring that the Integer type can be set equal to Int32 in some context (which it can't, because the latter is a finite type and won't satisfy the axioms of Integer), we can define a representation map from one type (Int32) into another (Integer). We can then prove that, in certain contexts, our representations of functions propagate representations correctly.

Note that "representation" is not an alternative to "type". Rather it's a relationship between two types. Representation selection needn't only be useful for the last leg of representing semantics on available hardware. You should be able to extract a representation and then deal with that represented value as an ordinary value.
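A minimal sketch of this representation-map idea (in Python, with names of my own; a real system would discharge the side condition by proof rather than by runtime check): Int32 values map into the integers, and 32-bit addition represents integer addition exactly in those contexts where the true sum stays in range.

```python
# Int32 does not refine Integer; instead it maps into it.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def rep(x32):
    # Representation map Int32 -> Integer (an inclusion on the valid range).
    assert INT32_MIN <= x32 <= INT32_MAX
    return x32

def add32(a, b):
    # Two's-complement 32-bit addition, modeled on unbounded integers.
    r = (a + b) & 0xFFFFFFFF
    return r - 2**32 if r > INT32_MAX else r

# In contexts where the mathematical sum is in range, add32 represents
# integer addition: rep(add32(a, b)) == rep(a) + rep(b).
# Outside that context the representation is not propagated (wrap-around).
```

The interesting judgment is the one in the comment: it holds only under a context condition, which is exactly why Int32 can't simply be "set equal to" Integer.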


I think studying PRNGs is good for a better understanding of binary rings. Without randomness computers would be very limited. The nature of relative primes and PRNG ring length is very interesting, and I think brings a new number-theoretic appreciation of these constructs that is not based in hardware. Now in some sense word size is arbitrary, but powers of two make some sense. Not many systems use nybbles any more (4 bits used to be used for binary-coded decimal), but larger word sizes are useful and in-between sizes not really necessary.

Edit: specifically linear feedback shift registers involve the binary representation, the theory of which is discussed here (groups and polynomial fields):

Abstraction vs Implementation

I strongly agree with this. The same can be said of Monads and functional programming. Monads can be built on top of pure functions to represent IO. You can type-check this as a high-level abstraction on top of pure functions, but if the monadic operations correspond to the hardware implementation, you can just compile them like you would a 'C' program. In other words we can have a duality, where the same program represents an abstract computation defined in terms of pure functions and proof theory, and yet is also directly implementable on hardware.

This is one of the things I am working on: a 'C'-like language built on top of pure lambda calculus and an HM-like type system, with clear semantics (and no undefined behaviour), that is type-checked like a pure functional language as monadic code, but, once the assumptions of the compiler have been validated, is compiled directly to an imperative machine code implementation.

Wouldn't like to use it

Given what I remember about interactive proof checkers, I don't think you really would like to use the IEEE floating point specification to verify software.

Someone should do this once, since it's very important to have a good formal spec for floats, but I am pretty sure it'll be a large complex spec, and it might take years to verify even simple algorithms.


I did not suggest use of the IEEE specification to verify software. Indeed, the only verification I even mentioned above was of hardware, gaining some confidence that the hardware FPU behaves according to spec (something a compiler or system might verify infrequently, akin to a memcheck).

The idea here is to program using only a few in-language abstractions (instead of second-class symbols like 'float'), then treat compilation to hardware as a form of acceleration of software models that use hardware-friendly abstractions.

With this, your language can have a bare minimum of primitive concepts, but you can still work towards high-performance computing - not by eliminating layers of abstraction, but instead by adding them, albeit with careful attention to hardware-friendly constructs that a compiler can recognize and optimize.

Sure. It's like exposition for cross-platform optimizers.

Sure. It's easy to define hardware binary addition in terms of blob semantics, for example:

(define blob-add
  (lambda (blob1 blob2)
    (if (zeroblob? blob2)
        (modulo blob1 (power 2 64))           ; reduce the finished sum mod 2^64
        (blob-add (xor blob1 blob2)           ; partial sum without carries
                  (shiftl 1 (and blob1 blob2)))))) ; carries, shifted left one
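The carry-propagation idea behind that sketch - XOR gives the partial sum, AND shifted left gives the carries, repeated until the carries vanish, with a final reduction modulo 2^64 - can be transcribed and checked in Python (unbounded integers standing in for blobs; an illustration, not anyone's actual blob API):

```python
# Carry-propagation addition on 64-bit "blobs".
def blob_add(a, b):
    while b != 0:
        # Partial sum without carries, and the carries shifted into place.
        a, b = a ^ b, (a & b) << 1
    return a % (1 << 64)
```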

Definitions of operations on floating-point representations can be defined in exactly the same way. And it's good, for a couple of reasons. As you say, you get stuff out where you can actually reason about it, for real. Which opens up all sorts of ways for an optimizer to figure out which hardware a program can run best on and whether the massive parallelism of a GPU can be used.

Most of the time though - when thinking in terms of the semantics of numbers, as opposed to the semantics of bits - the question is not which representation to use and which operations on that representation; the question is how much accuracy is required. It's worth it to me, for example, to explicitly tell the system I don't need more than 9-decimal accuracy, and then let the compiler work out which representation to use. On a platform where the fastest float operations are 64-bit doubles I'd hope for it to use that, whereas if there's a GPU with 32-bit floats available and the program is otherwise parallelizable, I'd hope for it to use that.

exposing hardware to code

On a platform where the fastest float operations are 64-bit doubles I'd hope for it to use that, whereas if there's a GPU with 32-bit floats available and the program is otherwise parallelizable, I'd hope for it to use that.

I'd rather provide (maybe parametrically) a module that tells me what my hardware provides, and be explicit about selecting one representation or the other.

In general, I'd prefer my language simplify the problems surrounding cross-platform simulation, regression testing, configurations management, and compilation. If hardware behavior, requirements, or assumptions are explicit in the program (or explicitly abstracted from it, e.g. via generic programming) then this is a better fit for my desired cross-platform programming experience than the sort of hopeful implicit adaptation you describe.

Explicit selection...

works fine if you know exactly which individual computer your software is going to run on. But if I'm writing code for the rest of the world to use, I'd rather say what the code's requirements are and then let the compilers map it onto all those thousands of different machines out there.

I got to really despise "our software now supports foo" meaning it would idiotically crash if you didn't have foo, regardless of the fact that you were doing nothing that actually used it.

re: explicit selection

Consider: what if instead of `int main(args)` you had `int main<machine-model>(args)`, i.e. presenting the machine model as a static parameter. Then our compiler can explicitly provide a machine model, and your code can use static assertions to document its own requirements. Everything is still explicit, but some decisions can be configured and managed at the compiler layer.

Such a machine model could include compiler recommendations (e.g. favor `machine-model::float_fast` instead of `machine-model::float32`). Conveniently, for securable programming, a machine model might also serve as a static capability, controlling access to side-effects (e.g. `machine-model::stdin()`). And I don't believe this would hinder "writing code for the rest of the world to use".
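An entirely hypothetical sketch of the `main<machine-model>` idea (in Python; `MachineModel`, its fields, and the calling convention are all invented for illustration): the model is an explicit parameter, the program's requirements are ordinary assertions against it, and the "compiler" - here, just the caller - supplies the concrete model.

```python
# The machine model as an explicit, static-style parameter to main.
from dataclasses import dataclass

@dataclass(frozen=True)
class MachineModel:
    float_fast: str       # the compiler's recommended float representation
    has_float32: bool
    has_float64: bool

def main(model: MachineModel, args):
    # "Static assertion": this program documents its own requirement.
    assert model.has_float64, "program requires float64 support"
    return 0

# The compiler/runtime explicitly provides a model for this platform.
desktop = MachineModel(float_fast="float64", has_float32=True, has_float64=True)
```

Running `main(desktop, [])` succeeds; a model lacking `has_float64` would be rejected up front rather than failing mysteriously later, which is the configuration-management benefit being claimed.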

I believe that explicit selection will ultimately offer the better programming and maintenance experience, after we account for configuration management, cross-device simulation and regression testing, formally documenting code requirements, etc..

model parameter

I am doing this now in Felix, only it's not machine model that's the parameter but operating system.

For example there's something like FileSystem[OS] which is a typeclass providing operations supported by most OS, defined in OS specific instances. If you want to do OS specific things like play with Unix permissions, you can use UnixFileSystem. FileSystem[Unix] implements most of the platform independent operations using that.

Unfortunately the programmer can choose the model on a per-scope basis, so the machinery isn't sound in the sense that a command line switch could enforce the model ... but then maybe that's a good thing (think about Cygwin, which can do both Unix and Windows OS calls).

the incoherent position of R4RS

A language capable of expressing all computable functions cannot be faithfully implemented on any hardware. All actual computation systems are finite.

R4RS acknowledges but does not gracefully resolve this contradiction. As an example, consider this rule, from R4RS about exact numbers:

Implementations are encouraged, but not required, to support exact integers and exact rationals of practically unlimited size and precision, and to implement the above procedures and the / procedure in such a way that they always return exact results when given exact arguments. If one of these procedures is unable to deliver an exact result when given exact arguments, then it may either report a violation of an implementation restriction or it may silently coerce its result to an inexact number. Such a coercion may cause an error later.

The upshot of this is that programmers working in R4RS are encouraged to write programs without regard to numeric size limitations, and to just accept that their program will fail in pretty random and certainly unpredictable ways if they cross a limit.

sophisticated programmers with sophisticated tools

Many scheme implementations base their bignum on GNU MP which is state of the art. If you make numbers big enough it may even switch to fft/convolution based multiplication.

If you are doing that sort of processing you probably know what sorts of limitations you face. And if your computation runs out of memory, it probably tells you.

re sophisticated programmers

I see that you're making claims about "sophisticated programmers" and "most implementations" so can I take that to mean that you agree the actual standard is incoherent in the way I described?

Incoherent, yeah, maybe.

It wasn't so much "incoherent" as it was "wimpy." As a consensus-driven document up through that point, it was carefully arranged to allow almost all of the different things that people actually did.

At the same time the editors hoped for greater uniformity in future implementations, so they scattered language around strongly recommending this and that, while not absolutely requiring something that would put many existing implementations out of conformance.

You can call it incoherent if you like, I guess; it encompassed many diametrically opposed implementation strategies.

reviewing the dichotomy (towards a synthesis)

Thesis: Computing systems are fundamentally signal processors, usually comprising a hierarchy of signal processors. The hierarchy ranges from circuit components to chips to machines to networks.

Most of the lowest level signals in the hierarchy are typically continuous signals such as voltage levels or the mechanical motion of a mouse.

In almost all of the hierarchy, but not at the very bottom, signals are typically discrete signals such as machine integers, bus signals, IP packets, keyboard codes, strings of ASCII text, IEEE floats, HTTP requests, abstract characters, exact integers, inexact reals, etc.

Note that a given physical signal in a computing system exists as more than one kind of logical signal. For example, a set of time-varying line levels is also a sequence of bytes is also an ASCII char, is also an abstract character .....

It is a material obligation of the designers of computing systems to harmonize those hierarchical meanings assigned to the signals in computing systems, at least within the operational conditions of the system itself. When designers fail to do this, the system exhibits bugs.

Computer programs are signals used to configure certain components of a computing system. Programming languages provide a syntax and associated semantics for programs of a certain sort. The semantics of a programming language, regardless of what formal or informal method is used to define those semantics, expresses a relation between two sets of signals, each presumed to be observable, which could be called input and output.

(Note: We are very thorough about this signals and systems view of programming. For example, in a low level language that happens to allow reading and writing bytes of memory at particular addresses, we regard the placing of the address for a read on a bus as an output signal, the byte so read and returned on the bus an input signal. In other words, by input and output we do not mean to limit attention to just the explicit "I/O facilities" of a language.)

Antithesis: Mathematics recognizes a class of mathematical objects that could be dubbed "computations". There are many ways to characterize this class. Generally speaking, every computation begins from some starting point - a finite object drawn over countable domains. A computation comprises a finite sequence of transformation steps, replacing the initial state with some later state. Each step is self-evidently and axiomatically one that can be taken by mechanical means in finite time. In typical formalisms, two computations are treated as equivalent if they begin and end in the same state, even if they do not travel the same path to get there.

The sets of computable functions, of normalizable lambda expressions, and of Turing machines that halt are isomorphic ways to characterize a very large set of computations.

If computations are allowed to assume a primitive computable operator that chooses, without restriction, an element of a countable set, then a larger set of computations is obtained (that includes computable functions as a subset). [Hewitt]

Some programming languages are notations for describing arbitrarily chosen computable functions. The Scheme programming language of R4RS can be viewed this way because it imposes no a priori limitation on the size of (for example) integers or vectors. It does not even require a faithful implementation to have such a limitation. R4RS is a notation for expressing arbitrary computable functions.

(Two notes: 1. Obviously, any language capable of expressing all computable functions must also be capable of expressing some non-computations (e.g., programs that diverge). 2. Some programming languages, such as C, are less general. C is formally a regular expression language.)

Formally, in the first approximation, R4RS is a notation for describing computations that transform finite amounts of abstract input (e.g., the return value from READ) to finite amounts of output.

(Note: In a finer approximation, R4RS can also be regarded as a notation for describing a process, an example of which is a REPL, that is an endless repetition of reading input, processing it in finite time, and producing output, while possibly changing internal state along the way. For our purpose here, we can regard a process as a family of computable functions. Each computable function in the family takes as input its predecessor plus any new externally supplied input. Each function in the family produces as return value its output, and the next function in the family of functions that comprise the process.)

Synthesis: embrace the power of and.

I think that this dichotomy is why I have two favorite programming languages. Sometimes I have tasks that have to be accomplished below that barrier, and sometimes I have tasks that can be accomplished strictly above it.

One is for getting down to the bits and bytes and actually directing the machine in exactly how to do something - and having done so then testing to make absolutely sure it's NOT doing it in some other way.

What I say to the compiler (Yes, I talk to it: don't judge me...) You optimized that multiplication operation? No, you mustn't. That created a security-breaking side channel. You failed to overwrite the memory I told you to overwrite because the program never referred to it again? Baka! You made the information in that buffer visible to an attacker! You changed the address the variable name refers to and wrote a new value, rather than overwriting the old value? Fie on ye, the entire point of that write was to ensure that the old value is no longer anywhere in memory! Wait, what, you copied the entire stack, including sensitive information, because something three libraries away captured a continuation? Unacceptable! You cut this error check, and the error-handling branch that depends on it, because you can show that this error could only be true after invoking undefined behavior? And then generate code that does the exact undefined behavior I was checking for, while eliminating the check? No, sorry, you don't get to do that.

When it's extremely important to know exactly what the machine is doing and be absolutely sure that the exact sequence of machine operations you give is being slavishly followed, in order if necessary, and that nothing else is being done - That's tcc with all optimizations off. And if it still makes a security-poisonous but "semantics-preserving" program transformation, you just have to grit your teeth, curse, and drop to machine code. Because, yeah, it's all about machine operations.

Other tasks, I want to do in terms of semantics rather than in terms of machine operations. And in that case I don't want to have to care what machine operations happen to get my semantic requirements fulfilled and I want the most expressive language I can get for talking, not about bits and bytes, but about those semantics. And when that's the case I usually use a lisp.

synthesis and lisp

In my mind, lisp throughout its history has been an expression of a synthesis of the bottom up hierarchy of signal processors, and the top-down mathematical abstractions about computation in general.

The first famous lisp, LISP 1.5, embodies this synthesis in a published implementation, right down to assembly code, alongside a meta-circular interpreter.

The lisps that branched and diversified and then found Common ground on lisp machines -- they were a synthesis. You didn't need *two* favorite programming languages. You needed one, lisp, in which the "level" boundaries were approximately as fuzzy or as bright-line as you needed them to be in the moment, depending on what you wanted to do. You could hack the system at all levels, and at multiple levels at once, and yet still never leave lisp.

Initially, Scheme appeared, among other things, as a restart on lisp that avoided the legacy of scoping bugs and had a simple answer to the upward funargs problem. And, by discovery, it admitted easy strategies for interpreters and compilers.

So that was a fresh line on lisp. Early Scheme.

Somewhere around R4RS, when Scheme was well outside of the lisp community, that was lost.

Scheme as we have it today is not a lisp. Not by any stretch. It's some theoretical standard for people who fetishize a certain commodity form of compiler and particular forms of academic publication.

I once beat my head against the wall trying to encourage the de facto owners of Scheme to return to the path of making it a nice lisp. I ought to have saved time and just stuck to the beating my head against a brick wall part.

Adherence to Bits is Symptomatic of a Slave Morality!

The balance of power has inverted. Everybody adheres to the power of the machine and gives in to it. The weak rule.

re adherence

Fight the power, man.


Saw Hegel/Fichte, therefore needed either Nietzsche or Marx.

re hehe

It has a kind of dialectical materialist sensibility, so it's already vaguely Marxian. The late RxRS series must be examples of the German ideology.

I'm only half German

The other half is Irish, which has no truck with materialism, dialectical or otherwise.

It isn't easy being green, black, red, and gold.

On Synthesis

On a more serious note: the thesis/antithesis/synthesis dialectic is due to Fichte and was a popular manner of writing of Hegel. It has subsequently been popularized in the Western world as a strategy for organizing thoughts or writing.

I think it is noteworthy that some modern philosophers don't see it as a very good dialectic. In layman's terms (which is good, since I am bad at philosophy), you end up with a synthesis which is a compromise, but the original tensions remain.

So it is entirely possible that there is no synthesis on this subject. The best you can do is compromise. That seems true to me.

re on Synthesis

The thesis/antithesis/synthesis dialectic is due to Fichte and was a popular manner of writing of Hegel.

Wait, do you actually think that's where it stopped? Marx did not kill off dialectic by introducing materialism. He showed how to make it not rubbishy.

Of course it didn't stop

No, of course not. Whether Fichte's dialectic is rubbish or not one can disagree on; I personally believe it's usually rubbish but people like Kolakowski or Zizek have different opinions on it.

Mao's On Contradiction is interesting here, though.

re of course it didn't stop

We have in society two material activities that are contradictory, one of which is the construction of a class of hierarchically arranged signal processing systems that we call "computing systems"; the other of which is the construction of theories of programming languages that attempt not to refer to the materiality of computing systems. That's the thesis and antithesis of the topic post: "binary representation" vs "something to rise above".

It makes no sense to say that there is no synthesis since, in some sense, you are soaking in it. The synthesis is the reality that joins PLT theory and practice. The material question is whether we can figure out where that contradiction between theory and practice winds up as it plays out in future history.

I happen to think it eventually plays out by throwing away the "something to rise above" view, and that this will end a lot of today's academic PLT activity, but I'm not certain how to prove it.

The machine is the arbiter

Personally I think you can only compromise, and different compromises give different solutions. Some solutions will be better than others. But the machine is, more or less, always the arbiter of that. So, yeah, I agree it is grounded in materialism somewhere, though maybe differently than you think.

I am not saying there is no synthesis; I am disputing that there is one synthesis to which the dialectic method should resolve.

Compromise, not synthesis, therefore.

The Nietzschean view of history as a power struggle seems apt, though. Industry vs academia, the slave versus the master, those who accept the machine as arbiter and those who do not.

But that's just entertainment. I don't think there is a synthesis. We will not agree on this.

material dialectics (re "the machine...")

Not to go too far down this rabbit hole but one more try:

I am not saying there is no synthesis,

It would be nonsensical, in material dialectics, to say there is no synthesis since in material dialectics, the synthesis is the actually existing circumstance (including its history and future).

In this case "binary representation" and "something to rise above" are both analytic concepts of what is actually the case, and what is actually the case is by definition the synthesis of those concepts (to the extent that they faithfully describe the situation).

Yah. I don't do that

Because they, or Marx, saw it as a tool for some explanation or view of history which more or less progresses linearly, along one line. I assume the world is much more messy, many contradictions, many histories, many outcomes.

re Yah

Because they, or Marx, saw it as a tool for some explanation or view of history which more or less progresses linearly, along one line.

As far as I know that is a complete fabrication. Do you have a citation?


No. That's my view on Marx. Too simplistic a scheme applied to a too simplistic (linear, eurocentric) view of history.

I have my own opinions too. But it's a long time ago since I read, glanced at, Marx. And thousands have published on the subject.

re marx

But it's a long time ago since I read, glanced at, Marx.


There are no enemies anywhere / There are no friends anywhere

Actually, you have no such assurances. For all you know, your stream of carefully controlled machine instructions is being JIT-compiled into something completely different by the firmware on your chip — and an execution trace forwarded to the NSA by signals superimposed on the power line. Or something. If you want full control, get a PDP-11/45.

I don't really feel the distinction

In my mind, even bits and words are idealized objects - they just happen to be idealized objects that are perfectly realized in hardware.

Programming languages that deal directly with these ideals are efficient because they have no impedance mismatch with the hardware. That isn't a bad thing.

The question is, are you happy with those idealized objects or do you have others in mind that you'd rather work with?

re feel the distinction

In my mind, even bits and words are idealized objects - they just happen to be idealized objects that are perfectly realized in hardware.

Programming languages that deal directly with these ideals are efficient because they have no impedance mismatch with the hardware. That isn't a bad thing.

I see two odd claims there: "perfectly realized in hardware" and "no impedance mismatch with the hardware". The absolutes "perfectly" and "no" seem false.

A classic work that might be useful is A Mathematical Theory of Communication, C. E. Shannon.

Certainly, by definition, bits and words are idealized objects.

The programming languages people generally discuss the most deal exclusively with discrete signals, sure, but there is no reason not to contemplate languages over continuous signals (and arguably, some exist).

The question is, are you happy with those idealized objects [bits and words] or do you have others in mind that you'd rather work with?

Do you mean like ternary logic circuits and words of those?

Or some higher level abstraction that reduces to some form of discrete signal directly realizable at the lowest levels?