Hungarian Notation vs The Right Thing

I've just read one of Joel Spolsky's usual rants, this one on the joys and virtues of Hungarian notation. You can take the boy out of Microsoft …

His example is avoiding malicious user input on a web form by ensuring all unsafe input is always encoded before use. He suggests two coding conventions before settling on Hungarian Notation as the Right Thing.

Now I expect LtU readers (certainly this one) will be wondering how a flaky substitute for a type system ended up being the Right Thing instead of the Real Thing we know and love (and at this point everyone should briefly murmur a prayer over their copy of TAPL). But any regular reader of Joel will know he is thoroughly pragmatic (or anti-academic, depending on your mood) and his solution fits his frame of mind. So, leaving aside ranting and wailing about Joel's lack of PL education (because, frankly, I'm doing enough for everyone) let's talk instead about how one could change his mind. Specifically, implementing Hungarian is a few days of drudge-work. You're never going to get Joel to sit down with SICP, EOPL or PLAI, and TAPL, to get the necessary background to implement his own statically typed language, and if you tried he would rightly tell you it would take too long to learn. Instead, could you deliver a statically typed variant of Javascript that would catch these errors in a similar time period? If so, what tools would you use? What type systems? How practical can we make our theory?
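(As a concrete sketch of what I mean: here is one way a statically typed JavaScript variant could catch the encoding bug, using a "branded" string type. I'm writing it in TypeScript-style syntax; all the names here -- SafeString, htmlEncode, write -- are invented for illustration, not from any existing library.)

```typescript
// A SafeString is just a string carrying a compile-time-only brand.
type SafeString = string & { readonly __brand: "SafeString" };

// The only way to manufacture a SafeString is to encode first.
function htmlEncode(raw: string): SafeString {
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;") as SafeString;
}

// Output only accepts SafeString; a raw string is a compile error.
function write(s: SafeString): string {
  return s;
}

const userInput = '<script>alert("xss")</script>';
// write(userInput);                       // rejected by the type checker
const page = write(htmlEncode(userInput)); // fine
```

A raw string passed to write simply fails to type-check; the brand exists only at compile time, so there is no runtime cost.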


Tulsa Association of Petroleum Landmen?


(I know, Types and Programming Languages, but still.)

Get In The 'Hood

Every community has its secret decoder rings. SICP, EOPL, CTM, PLAI, and TAPL just happen to be ours.

Bad practice

Sometimes it is a pragmatic choice to use abbreviations. Often, it is just a way of showing off some superficial knowledge. No offense meant, but the latter bothers me a lot; it's so silly, and grown-ups should grow out of this kind of stuff. Fortunately, a lot do.

Compression factor

The compression factor for these particular abbreviations ranges from 8 to 18, so I think pragmatism plays a big part here. That said, we need automatic wiki-like hyperlinking of such common abbreviations. Feel free to watch this space, but without holding your breath.


That would help a lot. For the moment, I personally would prefer that people follow the academic practice of using abbreviations only after properly introducing them in a discussion. At least as a courtesy to newcomers.

Just my opinion (JMO), everyone feel free to disagree (EFFTD).

low tech solution

Perhaps a list of abbreviations commonly used (glossary?) accessible via the FAQ link (or a new link in the left menu) might help in the meantime?


What about an ActionScript variant?

See LtU comments on MTASC and MotionTypes

The Obvious Answer

That's the obvious answer, but since neither you nor I am Nicolas, and he hasn't released the code, it isn't a proposal either of us could put to Joel. I'm really trying to get people to say how they'd put the theory into practice, rather than pointing to some existing work. Do we have the tools to easily do so, or do they still need development?

My tentative answer is that I'd use Scheme with Kanren for type inference. However, I don't think I could deliver a working solution within a week, partly due to my lack of familiarity with implementing type inference, and partly due to tools issues (getting Kanren to work on a system that isn't Gambit was a pain last time I tried). So my conclusion is that the tools aren't quite there yet. Or maybe I'm just a lamer. :-)

Seems to me that tools are there

In a rather large laboratory I know the default time for developing a language and toolset is at least one year; maybe it just isn't that easy? I don't think the complexity of implementing languages has a lot to do with tooling - although these can always be improved.

Reading his "rant," I don't think he actually needs or wants a new language - most of it is a case for using a particular coding convention. So maybe the question should be: in what language could his "coherence rules" be implemented? My best bet would be C# with attributes and reflection.

[I guess there are more, like Lisp, but I don't know that language too well.]

It Should Be Easier

I assume you're in an academic environment, where the premium is on making something new. I'm not concerned with novelty, but rather assessing the state of maturity of our theory and tools. I believe MotionTypes is a constructive proof that theory is sufficiently advanced to make a language akin to Javascript without incurring a PhD. So my question remains: can we do it cost effectively?

I don't know

I assume you're in an academic environment, where the premium is on making something new.

Mostly wrong on both counts ;-).

But seriously, I don't know how to answer your question; I wouldn't even know where to start. After half a century of research in programming language theory, a library of language books, miles of language articles, (tens? hundreds of?) thousands of (open-source) programming languages, and thousands of tools, is it easy to develop a simple typed language? Who knows?


This seems like something that could be addressed by applying the lessons from Units: Cool Modules for HOT Languages, which does a nice job of addressing both manifest and latent typing in the target language.


I read this paper a few years ago, and really enjoyed it. A few months later and an office move and I couldn't find it again... Which reminds me...

If you're using Java, a simpl

If you're using Java, a simple solution would be to wrap all functions that allow you to access user input with functions which return a given class representing unsafe input. The type checker finds the errors. It may be possible to avoid importing the unsafe functions' names into the scope of your programme, in which case, once again, a programme that compiles cannot break this requirement.

For extra credit, we can wrap output such that it takes only safe strings, which are distinct from ordinary strings. Then we can bind our hands to make sure that we use appropriate conversions to change our strings into safe strings. In this case, using a wrong conversion might give you garbage, but not insecure garbage, as long as you write appropriate conversion functions.

Of course, this looks like a grotesque parody of a monad, which it is. Learn Haskell, you Microsoft hippies. In general, what Spolsky is talking about is making latent type information explicit, and the most direct way is to reify that information into different types.

Manual Type Checking

I had the same reaction when I read Joel's article. He's just advocating a sort of "manual" type checking. His proposed system already requires rewriting the names of many library functions; it wouldn't be that much harder to instead annotate the functions' expected input and return types more specifically (as per Marcin's suggestion). The advantage is the guarantee of correctness; while some people (the JUnit community) really like guarantees of correctness, I don't know how strongly Joel, as a "bugz" kind of guy, feels about such guarantees. I think many people are turned off from guaranteed safety by the perceived investment involved.

To be fair, maybe automatic type checking also encourages complacency. It hasn't for me.

Hungering for a real typesystem

In Joel's defense, I thought one of his main points related to Hungarian notation was that in its original "Apps Hungarian" incarnation it was supposed to be used to augment the static type system in the language being used, by explicitly identifying the latent types of variables, by means of their prefix.

The latent types I'm referring to aren't the same as the static types the language already understands. At least, they shouldn't be - that would be an erroneous application of Hungarian notation ("Systems Hungarian"), which Joel criticized. Instead, the latent types in question are more specific to the application, and they aren't encoded as automatically checkable static types because there's no good or practical way to do that in the language being used.

For example, aliasing the int type in C using typedef doesn't buy you much, and wrapping it in a struct may have consequences you don't want to deal with. So instead, distinguish between e.g. row variables and column variables in Excel's source code with "rw" and "col" prefixes. This is done in many programs anyway, just not always in the specific Hungarian form.

In a latently typed language like Javascript or the server-side VBScript pseudocode which Joel seems to be using, this still seems like a reasonable solution. Retrofitting a static type system for this purpose seems like a heavy solution. I'd be more inclined just to use a wrapper function to perform output, and design it only to output objects of a certain type. That way you'd at least get a runtime error if you tried to output an unprocessed string. To be able to catch that statically, you might still want to use a variable naming convention, though.
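(A sketch of the wrapper-function idea above, in TypeScript-style syntax; EncodedHtml, htmlEncode, and writeHtml are invented names. The check happens at runtime, which is all a latently typed language gives you -- the mistake surfaces as an error instead of silently emitting markup.)

```typescript
// Encoded output is wrapped in a distinct object type.
class EncodedHtml {
  constructor(public readonly text: string) {}
}

function htmlEncode(raw: string): EncodedHtml {
  return new EncodedHtml(
    raw.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;")
  );
}

// The output routine refuses anything that isn't EncodedHtml.
function writeHtml(out: string[], s: unknown): void {
  if (!(s instanceof EncodedHtml)) {
    throw new TypeError("writeHtml: expected an EncodedHtml, got a raw value");
  }
  out.push(s.text);
}

const out: string[] = [];
writeHtml(out, htmlEncode("<b>hi</b>")); // ok
// writeHtml(out, "<b>hi</b>");          // throws TypeError at runtime
```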

The "row" and "column" exampl

The "row" and "column" examples remind me of another dimension of "type" that I suppose must have already been considered and written up somewhere, although I've never been exposed to it: the notion of units in hard science. The notion in science that if you do some math, the units of the result provide a check that the computation was performed correctly seems closely related to typechecking.

We could represent units in this sense by "subclassing" or in other ways borrowing the behavior of a real number type, but some operations on unit-based numbers work differently; addition is unit-preserving but multiplication changes the units. As well, it seems that if you had to explicitly encode units in your type system, you'd be forced to write a lot of explicit type conversion ("meters multiplied by seconds produces meter-seconds").

Perhaps there is a parametric-type way of approaching this--one could have a parametric type 'meter' which would wrap any other unit-based type, thus creating 'meter-seconds' and such. I'm not clear how you'd support reciprocal units in such a system, or that cancellation could work properly.

Pragmatically, it seems it would still be best implemented as special syntax in the language, rather than hacking it into the type system. But even so, the question is this really a "different" notion of type or one that the traditional notion includes?
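(For illustration, here is a minimal runtime sketch of unit-carrying quantities, in TypeScript with invented names. It shows the unit-preserving addition and unit-changing multiplication described above, including cancellation of reciprocal units -- whether this bookkeeping can be pushed into a static type system is exactly the question.)

```typescript
// A quantity is a number plus a map of unit exponents, e.g. {m: 1, s: -2}.
type Units = Record<string, number>;

interface Quantity { value: number; units: Units; }

const qty = (value: number, units: Units): Quantity => ({ value, units });

// Canonical string form of a unit map, so unit maps compare reliably.
const unitKey = (u: Units): string =>
  Object.entries(u).sort().map(([name, n]) => `${name}^${n}`).join(" ");

// Multiplication adds exponents: m * s^-1 gives m^1 s^-1,
// and m * m^-1 cancels to a dimensionless number.
function mul(a: Quantity, b: Quantity): Quantity {
  const units: Units = { ...a.units };
  for (const [name, n] of Object.entries(b.units)) {
    const sum = (units[name] ?? 0) + n;
    if (sum === 0) delete units[name]; else units[name] = sum;
  }
  return qty(a.value * b.value, units);
}

// Addition is only defined between identical units -- the check
// one would like the type system to perform statically.
function add(a: Quantity, b: Quantity): Quantity {
  if (unitKey(a.units) !== unitKey(b.units)) {
    throw new TypeError(`cannot add ${unitKey(a.units)} to ${unitKey(b.units)}`);
  }
  return qty(a.value + b.value, a.units);
}
```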

Already been done

Here's an example that I did in Haskell. It's also been done in C++ using integer template arguments rather than constructing Peano numbers.

Unfortunately, this approach

Unfortunately, this approach only solves the specific problem of SI physics units, not the general problem of dimensional quantities. You are apparently forced to encode all combinable units in a single explicit representative structure that knows about every unit that might be combined. There are other things than physics units.

For example, if I now come along and simulate "human beings" who have "hit points" and those hit points heal over time with dimensions of "hitpoints/second", I have to go make another one of these systems, or add 'hitpoints' in as yet another dimension to the existing system.

Someone can't come along and introduce a new dimension without changing the existing code. Thus this seems entirely to be "hacking it into the type system", rather than saying this is something "naturally expressible" in the type system. I suppose it's possible a language could exist that would syntactic-sugar-ify the problem, but I can't tell.

For a less contrived case (although, as a game programmer, the above case is not at all contrived to me), consider the originally mentioned "rows" and "columns". In a system that measures screen units in terms of character-widths and character-heights--which are different in pixel- or screen-size--it would be reasonable to have "row" and "column" units, which are dimensional. Rows and columns are not interconvertible; rows/columns or columns/rows are meaningless, although rows/rows or columns/columns are just dimensionless numbers; perhaps there is an explicit conversion between rows and columns for the sake of drawing diagonals and circles. Again, animated movement allows for "rows/second", and further derivatives w.r.t. time are meaningful for animation.

So while a system for hacking together one specific set of units may be a nice demonstration of the power of parametric types in this way, it feels more to me like saying "look, this system is turing complete", even though you'd never want to program it. It can be forced into the system, but only in an entirely clumsy way that is a mismatch for how the rest of the language works. (See previous comments about type-system vs. syntactic sugar, though.)

In fact, for this and the C++ implementation below, the focus here is on _dimensional_ quantities, not quantities with _units_. Dimensional analysis is great, but unit analysis is superior, since it distinguishes meters from feet (say). Clearly with dimensional types, you can use one canonical set of units, and even convert between the units on the fly to/from the canonical ones as requested, but I'm suspicious given that the code doesn't even mention, say, "meters", and simply says length, apparently assuming that the choice of length units is left to... convention! Shall we use hungarian notation to distinguish meters from feet, then? Or is the code just sloppily written in terms of naming conventions? Or am I missing something?

rows per column

rows/columns or columns/rows are meaningless

Hmm. My terminal seems to have 80 columns per row, and 25 rows/column. And a diagonal from the lower left to upper right corner has a slope of 0.3125 rows/column.

picky, picky...

Not to be picky (okay, well, yes, *being* picky)...

you have 80 units-width per row, and 25 units-height per column...

Thus, you have that the slope has a tangent of:

d = diagonal
a = height axis
b = width axis

slope = tan(alpha) = sin(alpha)/cos(alpha)
      = (a/d)/(b/d) = a/b
      = (25 units-height * K units-length/units-height) / (80 units-width * J units-length/units-width)
      = 25/80 * K/J, where K and J are dimensional conversion constants...

Therefore, even if the axis units are not the same, the slope is dimensionless (even when, per se, it *is* a type; it is a slope). After all, it is the tangent of an angle, and angles are dimensionless, even when measured in radians (definitely *not* typeless! enough trigonometry at school to forget *that*...)

I see that you have to stop somewhere... 80 characters wide and 25 characters high... those are the types, call it length, which is the dimension... not the same, indeed, but then, Physics has the real apples and has the real meter ;-)

anyway, the Haskell examples given above are pretty cool, I wish monads were easier to grok...

best regards,

Not buying it

Therefore, even if axis units are not the same, the slope is dimension-less

If you have a plot of position (say, in meters) and time (seconds), the slope of the curve at any point is the instantaneous velocity and has the units of meters/second.

Okay, (and I thought I was pi

Okay, (and I thought I was picky)

the slopes are not *always* dimensionless, but they were in the example given, I wasn't trying to generalize that trigonometry result to differential calculus...

on the other hand, in your example (and in general in Physics), the arrow, graph, etc., are just "gedanken" representations of the real physical entities... Velocity is not the slope of anything; it's the variation of position as time goes by.

SIDEBAR: Actually, in a sense, instantaneous velocity does not exist, because you need at least a bit of time to measure any position variation, and differential time or differential space do not exist (quantum limitations due to Planck's constant being (slightly) over 0). However, we can usually overlook those limitations and apply the abstractions right away (close to the quantum limit scales, however, you cannot ignore them, as the twentieth century dramatically showed us).

It just happens that there are certain mathematical constructs that allow us to have a better understanding of them, and velocity just happens to be the slope of a line segment tangent to the representation of the position and time on a graph. The slope, per se, is dimensionless (again, because it's the tangent of an angle), but its value is given the velocity meaning: you can graph velocity against time, and those slopes become dots on a paper.

Angles are just that, angles, and dimensionless by definition. Dimension in a vector is carried by its modulus, not by the angles with the (hopefully orthonormal) basis you are using to define/measure/signal the vector space.

Anyway, I wasn't trying to be too picky in my previous reply, just demonstrating that rows/columns or columns/rows are meaningless, as a previous poster said... Think of a radar screen: you don't have them, yet you can graph and write things on it (planes, for example...). Cols/rows are just an easy Cartesian abstraction; most human direct experiences are naturally easier to express in polar/spherical or cylindrical coordinates, yet the experiences are the same, even when the perception we have of them is different... (OK, in a way, they are not the same experiences, but let's not get **that** picky...)

best regards,

The slope might represent a v

The slope might represent a value with the units of meters/second, but it is still dimension-less.

Actually, I think you were pi

Actually, I think you were picky in entirely the wrong direction.

The person you replied to was confused, though. "80 columns per row" and "25 rows per column" aren't the dimensional quantities he was thinking of, because he's been confused by an aspect of the English language. There are actually 80 columns per screen and 25 rows per screen. Calling these "rows per column" or "columns per row" is entirely misleading. Instead I'd say "screen_width = 80 width-units", "screen_height = 25 height-units". You could certainly consider "80 width-units per height-unit", but that actually describes an incredibly gentle slope--one which travels 80 columns while only climbing one row. (See below for an example of a more reasonable slope number.)

I wasn't going to comment, but, in fact, slopes for programmatic purposes should not be dimensionless. You're thinking of an entirely different kind of slope, one that is irrelevant. If I want to plot a point on a line, and those points must be expressed in units-height and units-width, then I want to manipulate a slope that is in those same units, not a dimensionless slope.

You might want to talk about a dimensional screen space metric (actual physical space on the screen), but it's not going to be a correct slope for programming purposes--although it surely is the true slope that you would want to print numerically at a human who wanted to know the slope of the line, or if, say, the human as programmer has a real-world desired slope to draw. But the program itself can't use it.

Let's say your screen is a normal 4:3 screen. Let's arbitrarily define one length-unit to be 1 screen width. Then there are 80 width-units in 1 length-unit, and 25 height-units in 3/4 of a length-unit, so 33.3 height-units in 1 length-unit.

If I want to draw a nice 45-degree diagonal in my programming language--that is, a line with viewspace slope 1--then I want to draw a line that is 1 length-unit / 1 length-unit in slope. But programmatically, I need to move a point around in width-units and height-units, so I need the slope to reflect that. We apply the above conversions (this is the inverse of the conversion you made); so if J' = 80 width-units per length-unit, and K' = 33.3 height-units per length-unit, I want to plot my line with viewspace slope 1, i.e. (1 length-unit)/(1 length-unit), as (1 length-unit * K') / (1 length-unit * J') = K'/J' = (100/3)/80 = 5/12 height-units per width-unit.

This says that to draw a true 45-degree diagonal, I need to move my point by 5 vertically for every 12 I move horizontally.
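(The arithmetic above can be checked mechanically; a throwaway TypeScript sketch using the constants just defined:)

```typescript
// Width- and height-units on a 4:3 screen, with one length-unit = 1 screen width.
const J = 80;            // J': width-units per length-unit (80 columns across)
const K = 25 / (3 / 4);  // K': height-units per length-unit (25 rows in 3/4 length-unit)

// A 45-degree line (viewspace slope 1) expressed in screen cells:
const slope = K / J;     // height-units per width-unit
```

This gives K = 33.3... and slope = 5/12, i.e. 5 cells vertically for every 12 horizontally.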

The unit checking is necessary, and saves me from doing something unintentional, like using the inverse of the slope from what I want.

In fact--and this is really the point of the analogy to Hungarian--even if my width and height units are identical, maybe I _do_ want to use the above style of logic (with them as different units, and a dimensional slope) so that I get the typechecking to avoid accidentally using one in the wrong context.

Of course, a slope I feed to atan() needs to be dimensionless, and your original comments apply to that case. (Although you've picked the strangest way of defining the slope via trigonometry as possible.)


I am not sure your exposition sounds convincing enough to satisfy my pickiness... But it doesn't sound wrong, either.

(Although you've picked the strangest way of defining the slope via trigonometry as possible.)

I guess it's a matter of school... you get taught in different ways depending on your teachers... :-)


C++ implementation of dimensions

In the documentation for the Boost MPL library, there is an example of how to implement dimensional analysis in C++ at compile time.

Mikael 'Zayenz' Lagerkvist

Object-oriented units of measurement

I think Object-oriented units of measurement (presented at OOPSLA last year by Eric Allen et al.) is an interesting paper on this subject.

From the abstract:

[...] We show how to formulate dimensions and units 
as classes in a nominally typed object-oriented language 
through the use of statically typed metaclasses. 
[...] We also show how to encapsulate most of the
“magic machinery” that handles the algebraic nature of
dimensions and units in a single metaclass that allows
us to treat select static types as generators of a
free abelian group. 

Previously, on LtU...

You might find some inspiration from a previous discussion about types and units on LtU (or at least a few hyperlinks).


aliasing the int type in C using typedef doesn't buy you much, and wrapping it in a struct may have consequences you don't want to deal with.

Even if you are going to use a 3rd generation imperative language, you can choose one with a decent type system.

In Ada you can write:

type Row is new Integer range 1 .. 25;
type Col is new Integer range 1 .. 80;

and the compiler will make sure you don't mix the two, unless you explicitly say you want to.

The talk security is a red herring

The problem is that dynamic strings have to be encoded in HTML. This affects all strings, including the text of a math paper, and Java source.

If the problem is the requirement for encoding HTML output, then the natural thing is to have a mechanism for creating HTML output where all dynamic strings are automatically encoded as they are written to the page. Tags are sent to the page using another method.

The right way to do this with types depends more deeply on how you want to express UI code.

Type synonyms considered harmful

The trouble with Joel's example is that he is using one type (string) for two things which are logically distinct.

In Haskell, this is what I'd do:

newtype UnsafeString = UnsafeString String
newtype SafeString = SafeString String

request :: String -> IO UnsafeString

htmlEncode :: UnsafeString -> SafeString

write :: SafeString -> IO ()

-- And, just in case you need it...
promiseSafeString :: UnsafeString -> SafeString
Problem solved. You never write an unsafe string without the compiler complaining about it, unless you explicitly introduce a possibly unsafe conversion. All sufficiently strongly-typed languages have an equivalent idiom. Even in C++, I can write this:
template<class Tag, typename T>
class ValueWrapper {
public:
    typedef T value_type;
    typedef ValueWrapper<Tag,T> wrapper_type;
    typedef typename boost::call_traits<T>::reference reference;
    typedef typename boost::call_traits<T>::const_reference const_reference;

    explicit ValueWrapper(typename boost::call_traits<T>::param_type p_value)
        : m_value(p_value) {}

    reference unwrap() { return m_value; }
    const_reference unwrap() const { return m_value; }

private:
    T m_value;
};

struct unsafe_string_tag {};
typedef ValueWrapper<unsafe_string_tag,std::string> UnsafeString;

struct safe_string_tag {};
typedef ValueWrapper<safe_string_tag,std::string> SafeString;
And now the compiler will prevent me from mixing safe and unsafe strings. It's less elegant than the Haskell version, admittedly, but you only have to write ValueWrapper once, and after that it's two lines per safe type synonym instead of Haskell's one.

Missing the point?

Joel is not talking about a type system. He's talking about how the original Hungarian notation differs from what is currently employed. In the original Hungarian notation, one was specifying how a variable is to be used. As a crude example, consider temperature and length:

result_temp = air_temp + widget_length;

Right off the bat, you see that there's a problem. Further, you see there is a problem without having to check another portion of the code to see how those variables should be used. Those both might be floats and the compiler will cheerfully add them but it's nonsensical and Hungarian notation as it was originally conceived would allow one to instantly see this. It was about how data were to be used, not how they were typed.


Ovid: It was about how data were to be used, not how they were typed.

But types are precisely about how data are to be used! Elsewhere in the thread are perfectly fine examples in Haskell and C++ of how a separation of operations can be accomplished even when the underlying representation is the same, and my own response recommending the investigation of a module system for JavaScript based on "Units: Cool Modules for HOT Languages" was motivated by the same thing being done as a common example for ML-style module systems, e.g. defining a "currency" functor that can be instantiated for "dollars" and "euros," both of which are represented internally by floats, but whose operations are compile-time incompatible with respect to each other (no adding dollars and euros without conversion through the exchange rate).

Having said that, I appreciated Joel's point that App Hungarian degenerated into Systems Hungarian, and can see how App Hungarian is useful when you're programming in a language with a type system weaker than, say, C++'s.
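(For what it's worth, the currency example can be approximated in TypeScript-style syntax with branded number types; Dollars, Euros, and the helper functions are invented names, and this is a sketch rather than the ML functor from the paper.)

```typescript
// Both currencies are plain numbers underneath, but carry distinct
// compile-time brands, so mixing them directly is a type error.
type Dollars = number & { readonly __currency: "USD" };
type Euros = number & { readonly __currency: "EUR" };

const dollars = (n: number) => n as Dollars;
const euros = (n: number) => n as Euros;

// Same-currency arithmetic is allowed...
function addDollars(a: Dollars, b: Dollars): Dollars {
  return dollars(a + b);
}

// ...and crossing currencies requires an explicit exchange rate.
function toDollars(e: Euros, rate: number): Dollars {
  return dollars(e * rate);
}

const total = addDollars(dollars(3), toDollars(euros(2), 1.1));
// addDollars(dollars(3), euros(2)); // rejected by the type checker
```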

I'm not convinced

But types are precisely about how data are to be used!

That's true, but it comes down to whether you want to have all these types for every little thing. Acceleration and spices are obviously two different things, but it's less clear that "font height" and "font width" need to be distinctly separate at the data type level.

And he actually uses it as an automated typesystem!

In the ASP->PHP translator his company has built. From his article:
Remember the problem of how to translate a(1), which could mean "look up the 2nd element of array a" or "call the default method of the object a passing the argument 1" depending on what type a contains at runtime? This really matters, because we use arrays, and because we use the built-in class RecordSet all over the place, doing things like rs(1) which is short for rs.Item(1).Value, and since VBScript is latebound there is no way to know what code to generate in PHP until runtime, and that's too late! The only correct thing to do in PHP would be to generate code that checks the type of a, and decides, at runtime, whether to do an array lookup or a method call. This is messy and slow and would suck big rocks in the kinds of tight loops where you tend to be using arrays.

How did we fix it? Well, thanks to Hungarian notation, so callously dissed by developers who would not recognize a superb coding convention if it walked up to them on the Shanghai Maglev train and shook their pants leg, every recordset at Fog Creek starts with the letters "rs". And Thistle looks for the rs and says, "ah, this is a recordset, you're not looking for an array value, you're calling the default method," and generates fast code. Based on your age you will either call this an evil hack (if you're young) or an elegant hack (if you're old); in either case it's a huge optimization made possible by the fact that Thistle only has one program to compile. Outside of Fog Creek it wouldn't work. All hail Hungarian notation!

Also, Hungarian isn't very far from the Lisp '-p' convention or the Scheme/Ruby use of '?' and '!'.
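(The Thistle trick quoted above is essentially prefix-driven dispatch. A toy sketch in TypeScript-style syntax, with translateCall invented for illustration:)

```typescript
// Decide at translation time whether a(1) is an array lookup or a
// default-method call, purely from the identifier's Hungarian prefix.
function translateCall(name: string, arg: number): string {
  return name.startsWith("rs")
    ? `${name}.Item(${arg}).Value`  // recordset: default-method call
    : `${name}[${arg}]`;            // anything else: array lookup
}
```

This makes the objection below concrete: the scheme is sound only as long as every "rs" identifier really is a recordset.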

wow. just wow.

Well, thanks to Hungarian notation, so callously dissed by developers who would not recognize a superb coding convention...

He sounds a little extra-defensive, like a man maligned by serious scientists who don't recognize the greatness of his insights.

...every recordset at Fog Creek starts with the letters "rs". And Thistle looks for the rs and says, "ah, this is a recordset, you're not looking for an array value, you're calling the default method," and generates fast code. Based on your age you will either call this an evil hack (if you're young) or an elegant hack (if you're old)...

Definitely evil, but I guess this betrays my age. "My rsThingamajig object isn't a 'record set' it's a 'registered signal'!" This is not a substitute for a type system.


He's defensive because the people on his forums start an anti-hungarian thread every once in a while -- this article was written before the "defense of hungarian" one.

What's in a name

Personally I think Joel misses the boat in drawing an arbitrary distinction between Systems and Application notation. At a very base level of abstraction, those Systems notations are synonymous with Application notation. As you move up the level of abstraction, the notations that are important at the base level become unimportant. The same happens with his Apps notation - as we move up the abstraction layers, it's totally unimportant whether units are relative or absolute, or whether the access is via a RecordSet.

And that's the usual problem with Hungarian rules: they don't seem to respect abstraction boundaries. Data abstractions start to bleed information about the internal representation of the type (be it primitives such as int, dbl, and string, or, moving up the layers, absolute and relative coordinates). This information may be important at its level of abstraction, but if you don't hide this representation at the boundaries, then you get leaky abstractions.

More generally, Hungarian notation is an attempt to enforce rules about good naming conventions. The problem is that naming things properly is a large part of the battle in making code readable. There are no shortcuts to be had in communicating logical thought processes. It also has the problem of not promoting what should be promoted - that is data abstraction and encapsulation.

Cultural Issue

Chris Rathman: It also has the problem of not promoting what should be promoted - that is data abstraction and encapsulation.

Well, this is the same organization that littered public data members all over its "object-oriented" application framework. I'm sorry, but I've never been impressed with Microsoft as a software engineering organization. As a very clever mondo-space-and-or-time-optimized hack organization, yes. Unfortunately, their philosophy leaks out into the minds of people who've only ever been Windows developers: working with code developed by people with no Mac or UNIX experience is, without exception in my experience, an excruciating ordeal.

Basic solution

If we never want to use Request without encoding the data, why not define a function that returns the data for an encoded request? Perhaps calling it EncodedRequest will help. Then make sure Request is never called unless there is a specific need.

It seems pretty obvious and yet no one has suggested it so maybe I'm missing something.
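The suggestion can be sketched in a few lines (the form table and both function names are hypothetical stand-ins for whatever the real request accessor is): expose only the encoding wrapper, and keep the raw accessor out of everyday use.

```typescript
// Stand-in for the web framework's form data (hypothetical).
const form: Record<string, string> = { name: "Alice & Bob" };

// Raw, unsafe accessor: only call this when there is a specific need.
function Request(key: string): string {
  return form[key] ?? "";
}

// The safe accessor the rest of the code uses: data is always
// encoded before it is returned, so callers never see raw input.
function EncodedRequest(key: string): string {
  return encodeURIComponent(Request(key));
}
```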

Spolsky covered this in his a

Spolsky covered this in his article. Briefly, the point is that it must be possible to decide the correctness of each individual statement using only local information. By modelling safety as a property of the data, this becomes a process we can formalise.

Formalisable processes can be applied automatically, and so consume no employee time, which is known to be the biggest factor in the cost of software development. Because all the information this process needs is local, programmers can easily apply it as they programme, and when it is violated the necessary remedy should be relatively easy to see.

More generally, we prefer locality of reference because, if we can prove a property about a unit in isolation, we can move (or apply) that unit to another context where the property will still hold, without having to re-analyse the unit in the new context. This means we can change the programme that forms the context for that unit without analysing the effect on that unit in that respect.
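One way to formalise "safety is a property of the data" in a statically typed variant of Javascript is a branded type: escaped and raw strings become distinct types, so the compiler can check each call site using only local information. This is a sketch under assumed names, not anything from Spolsky's article.

```typescript
// A branded string type: values of this type can only be produced
// by going through the escaping function below.
type SafeHtml = string & { readonly __brand: "SafeHtml" };

// The only way to make a SafeHtml: escape the raw input.
function escapeHtml(raw: string): SafeHtml {
  return raw
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;") as SafeHtml;
}

// The sink only accepts escaped data, so correctness of each write
// is decided locally by the type checker.
function writeHtml(html: SafeHtml): void {
  console.log(html);
}

const userInput = "<script>";
// writeHtml(userInput);          // rejected at compile time
writeHtml(escapeHtml(userInput)); // accepted
```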


Let me clarify: I don't agree with his initial premise. For example, I think his first solution is the correct one: immediately format input into a canonical form. This works in his database issue because if you're going to write to a database, you may as well use the form you have it in. If you would like to print data from the database in a certain way, use that way's printing function.

string name;
name = EncodedRequest("name");

I agree with you that we want to automate the verification of these functions. I simply don't see how Joel is suggesting that a name mangling scheme is going to make this better.

Because it is a primitive for

Because it is a primitive form of type system.

format input into canonical form

He's badly mixing up the examples there, but I think he actually dismisses that solution because people want things to be stored in the database in a canonical form. The problem is that this canonical form isn't suitable for outputting unencoded, and the `encoding' is different for different types of output:
HTMLWrite(HTMLEscape(name)); // changes < into &lt;
LaTeXWrite(LaTeXEscape(name)); // changes \ into \backslash
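The snippet above can be fleshed out as follows (function names are illustrative): the same canonical stored string needs a different escape for each output target, which is why a single encoded form in the database can't work.

```typescript
// Canonical, unencoded form as it would be stored in the database.
const stored = "a < b \\ c";

// HTML output: & and < must be entity-escaped.
function htmlEscape(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;");
}

// LaTeX output: backslash is the special character instead.
function latexEscape(s: string): string {
  return s.replace(/\\/g, "\\backslash");
}
```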

but why?

Firstly, I am glad I am not alone calling Joel on this.

As to your main question about teaching Joel to use a statically typed language: why? There are already decent statically typed languages available to do this kind of thing. And even a dynamically typed language could work, as long as the restriction (to urlencoded strings, in the original example) is expressed in the types and sufficient unit testing is done.


Sorry if I've been unclear. My point is not to teach Joel to use a statically typed language, but to assess the development of our tools by seeing if it is practical to solve this problem with the Right Thing in an economically viable amount of time. Though he was using pseudo-code, let's assume he was using Javascript, where there is no statically typed alternative available.

What do you mean by The Right

What do you mean by The Right Thing if you're not allowing the solution of using a statically typed language? Catching logic errors by performing a static analysis is what static languages are for.

What I Mean

I am allowing a statically typed language, as this seems to be what Joel wants. My comment was in response to the previous poster, who said:

There are already decent statically typed languages available to do this kind of thing

I was merely saying that for Javascript there is no statically typed alternative.

Well, there are decent static

Well, there are decent statically typed languages available. Why not compile to Javascript if you have to use Javascript, or statically typecheck your Javascript? Of course, the kind of thing Joel is talking about is server-side processing, so you can use a statically typed language.