On the (perceived) equivalence of statically and dynamically typed reflective programming languages

Hi. I'm new here.

I've dug up some old topics on the dynamic/static-typing cold-war/debate, and considering this to be a predominantly pro-static, all-determinism and pro-FP message board, I lay down this question either as a test of my misunderstanding, or as a fair argument for a particular point which I think hasn't been mentioned.

One claim that's been made in many topics (especially on this forum) and in recent papers on modern type theory (even though I have no idea about its origins) is that it is possible to construct a 'Universal' type to capture the notion of run-time typing:

data Univ = Inte Int | Str String | ...

This has been used to justify the claim that, since such a hack can be used to bypass the type-checker, statically typed languages can directly express all programs in dynamically typed ones, while the reverse does not hold.

One argument that has been raised against this is that, currently, this isn't very convenient (especially in SML, Haskell and any other widely used language with type inference). One would be forced to work with explicit type constructors; functions would have to be 'lifted' (e.g. lift :: (Int -> Int -> Int) -> (Univ -> Univ -> Univ), so that lift (+) operates on Univ values) in order to work with data of this sort, and results would have to be 'flattened' again (Univ -> Int) to be of any use to the rest of the program.
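
To make the inconvenience concrete, here is a minimal sketch of that lifting and flattening boilerplate in Haskell (liftIntOp and unInt are hypothetical helper names, and the Univ declaration is cut down to two constructors):

    data Univ = Inte Int | Str String

    -- lift a binary Int operation to operate on Univ values
    liftIntOp :: (Int -> Int -> Int) -> (Univ -> Univ -> Univ)
    liftIntOp f (Inte a) (Inte b) = Inte (f a b)
    liftIntOp _ _        _        = error "run-time type error"

    -- 'flatten' a Univ back to an Int for the rest of the program
    unInt :: Univ -> Int
    unInt (Inte n) = n
    unInt _        = error "expected an Int"

    example :: Int
    example = unInt (liftIntOp (+) (Inte 1) (Inte 2))   -- 3
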
One rebuttal that has been made is that the blame lies /not/ with type systems in general, but with how they are currently used and implemented.
And so I attempt to debunk this claim by stating that the perceived shortcomings of latent typing lie not with the concept as a whole, but with how (the majority of) current languages (Ruby, Python, PHP) go about it. I should mention now that I am most familiar with Scheme (and Lisp in general) and intend to use it to put forward my arguments.

Static typing is essentially a method by which a program can be mechanically checked to guarantee that whenever two functions f and g are composed as (f o g)(x): x belongs to the domain of g; the codomain of g is the domain of f; and all functions and/or operators are consistent in this fashion. I know very well that static typing has much greater potential than that (as evidenced by dependent types, GADTs, type/module hierarchies and so on). This checking is performed during compilation (in other words, before the program is actually run); it does NOT add any capability or structure to the code, and it has no effect on the execution of the algorithm.
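
For instance, here is a minimal sketch of that composition check in Haskell (f, g and h are hypothetical names):

    g :: String -> Int
    g = length

    f :: Int -> Bool
    f = even

    h :: String -> Bool
    h = f . g        -- accepted: the codomain of g (Int) is the domain of f
    -- bad = g . f   -- rejected at compile time: Bool is not String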

Many modern (mostly dynamically typed) functional languages are implementing a form of compile-time metaprogramming that allows direct access to manipulate or query the source code of the surrounding program. My question is: isn't static type checking already possible to implement through that, so that a dynamically typed programming language with metaprogramming features (like Lisp) is MORE expressive by the criterion of compile-time validation? PLT Scheme programmers will be aware of this if they've ever read up on or used MrSpidey and/or MrFlow.
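
To illustrate the idea (this is only a sketch, not any particular system): a type checker is itself just a function over syntax, which a compile-time metaprogramming facility could run before any code executes. A toy checker for a two-type expression language, with all names hypothetical:

    data Expr = LitI Int | LitB Bool | Add Expr Expr | If Expr Expr Expr
    data Ty   = TInt | TBool deriving Eq

    -- a compile-time 'sweep' over the syntax tree: Nothing means reject
    check :: Expr -> Maybe Ty
    check (LitI _)   = Just TInt
    check (LitB _)   = Just TBool
    check (Add a b)  = do { TInt <- check a; TInt <- check b; Just TInt }
    check (If c t e) = do
      TBool <- check c
      tt    <- check t
      te    <- check e
      if tt == te then Just tt else Nothing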

Of course, statically typed languages with such features would be capable of doing the same. But I'd like to digress for a moment to say that working with Template Haskell or F#, or integrating features like macros into such languages, is plain hell.
Anyway, to deny the above statement (as opposed to the criticism of Template Haskell, which is sure to start a flame-war) is plain hypocritical for advocates of static typing and the argument about its superiority mentioned earlier. A programming language with features allowing compile-time reflection and processing would easily be capable of isolating a few modules, marking them 'typed' and sweeping them with the Hindley–Milner magic.

Of course, this would all be quite impractical. Implementing or extending a type system is a difficult task (even though so far I'm cool with the current and most popular soft-typing attempts), but the same could be said for the Universal type constructor and all the baggage that comes with it. Getting the two worlds to talk to each other is a pain.

Static typing is a limited form of compile-time sweeping. Q.E.D.?

Once again, the playing field is level, and the choice between the two is almost entirely about preference.


where you are wrong

You are wrong in assuming that nobody agrees with you. That's a swell attempt to articulate it.

-t

I don't think either is more

I don't think either is more expressive than the other, I just think statically typed languages are safer. All things being equal, I would thus prefer a statically typed language. Of course, all things are not equal, so each choice must be weighed very carefully. Static typing buys you a lot of safety though, so there have to be very compelling reasons to go the dynamically typed language route IMO.

I think once again the

I think once again the age-old argument about the flexibility and simplicity of dynamically typed languages, when compared to even the best of breed in the static world, applies. They're strictly equal in what they can compute, and (let's be honest here) exploratory programming is a whole lot easier in a world where conditionals can return two distinct types, procedures have variable arity, and having many flexible (though not quite as predictable) paths through the code is a whole lot nicer.
As long as Rice's theorem holds, there will always be a grain of truth in the above paragraph. And *chuckle* unless we're debating unicorns here, we don't have to lay down any bets.

I am ready to lay my hands down and admit that if your software is going to operate a multi-million dollar space module, perform surgery on heart-disease patients or regulate the pressure in a nuclear reactor, then it would be wise to use every static method in the book. In fact, in such cases (especially when lives and careers depend on it) it would be criminal not to resort to the most stringent forms of code safety before release (whether that involves hundreds of unit tests, contracts upon input and output, or stress-testing to the very end), rather than going 'oops!' and releasing a patch every few hours whilst money is being lost and hydrogen missiles are spreading fireworks over your city skyline.

Static Typing and Flexibility

I've worked with statically typed languages (admittedly, not production-ready ones) where you can return values of two distinct types ad hoc and automatically get a type union. Of course, this language didn't use Hindley–Milner type inference (indeed, it didn't infer 'types' for each expression, but only whether each expression was 'type compatible' based on flows of control). A function could also accept values of any 'type' for pattern matching.

Monomorphism, and monomorphic type inference (as seen in Haskell, ML, etc.), may still be too 'rigid' to reach the dynamism of dynamically typed languages. The idea that each expression must 'best match' one and only one type is problematic.

But, given that the possibilities for implicit, semantics-preserving structural transformations and for type-dependent user-defined syntax ultimately require an understanding of 'types' compatible with static typing (such that programmers can specify it), I believe that static or soft typing will, in the long run, be even more dynamic than 'dynamic typing'.

As far as functions with variable arity and such go, that is generally something determined at compile time. It could be handled by a pattern-matching macro language; I'd hate to call it a 'benefit' of dynamic typing.

I can buy the hypothesis

I can buy the hypothesis that dynamically typed languages are more convenient for exploratory programming, but I'd counter with the hypothesis that statically typed languages are better for well-structured design. Many is the time that I've concocted a design in my head, only to have the typechecker barf at me; after a few minutes of thought, I'd realize that my design was not nearly as simple and coherent as I'd originally thought. A few more minutes of restructuring for more sensible types yielded a far better design.

So perhaps the more appropriate metric in a given situation is whether one is doing prototyping or design.

I don't agree. I find

I don't agree. I find exploratory programming easier in a statically typed language. The types force me to think straight.

The need to "think straight"

The need to "think straight" is what many people find problematic with types when it comes to "exploratory programming". Think about it this way: it is difficult to maintain one's mental bearings while in the middle of a brainstorm.

And when you're exploring a new idea to solve one problem, the last thing you want is some pedantic Flight of the Phoenix engineer interrupting you and refusing to get out of your face because the ship is 'unsound' or you haven't satisfied one of his many intrusive demands. (If the engineer can find these problems, why can't that bastard jury rig a solution?)

But, still, I'd bet money that static typing itself isn't the primary culprit in undermining exploratory programming so much as some properties strongly correlated with static typing due to the common compile-time environment, including the common (but completely unnecessary) requirement to introduce boilerplate code just to get stuff up and running, and the common requirement to pre-declare types rather than have them (including their structure) wholly inferred.

The advantages of dynamic languages for exploratory programming certainly have little to do with 'expressiveness', at least not in any formal sense of the word: the ability to express more concepts (expressiveness) and to modularize them so they can be expressed directly (expressive power) is distinct from the ability to write and run incomplete programs or to quickly test program fragments in a REPL.

The Price of a MetaLanguage

A language that offers thorough control over the post-processor pipeline - enough to add one's own typechecker (or optimizer, or semantics, etc.) - certainly can be quite expressive. It would be possible in such a system for an inherently dynamic language to gain the benefits of typechecking.

But such features do have a non-trivial cost when it comes to modularity and hooking together program components built by multiple groups (possibly with ownership issues across boundaries). It would not be a simple task to design a language that can integrate its own 'special' postprocessor in a manner that does not interfere with the 'post-process' created for these distinct modules. I.e. every group ends up implementing its own personal set of post-process components, and suddenly there isn't one coherent 'language' with which to piece together programs; instead, there are many languages, constructed in a meta-language but possessing divergent semantics.

There have actually been quite a few comments related to these issues on LtU, especially in arguments for and against the use of macros. Macros (and other preprocessors) that perform a lot of code-walking generally cannot be integrated: they are unaware of one another, tend to make different assumptions, and easily interfere with one another.

Thus, in order to piece multiple components (e.g. different libraries, frameworks, or modules) together, one would typically need to limit the scope of post-processing and code-walking efforts that might otherwise cross component boundaries. Each module can be its own little 'internal' universe, but one needs some common interstitial rules to piece them all together.

And, in a dynamically typed language, the natural rules to fall back upon are the dynamic ones. If you needed protection from other modules' inputs (and you would, if your own meta-programmed typechecker is to pass your program), you'd need to resort to gatekeeper methods that perform additional checks on inputs, converting the good ones and filtering out or throwing an exception for the bad ones. Thus you'd still end up paying a (lesser) runtime cost for all these checks - lesser because you could avoid such checks after you get past the gatekeepers. More relevant, however, is that your module's interface would be back to being less 'expressive', not indicating to other users the sorts of inputs it requires.
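
A minimal sketch of such a gatekeeper, reusing the Univ type from the topic post (cut down to two constructors; the other names are hypothetical): the check is paid once at the boundary, and everything behind it works with plain Ints:

    data Univ = Inte Int | Str String

    -- the gatekeeper: validate a dynamic input once, at the module boundary
    gatekeeper :: Univ -> Either String Int
    gatekeeper (Inte n) = Right n
    gatekeeper _        = Left "module expects an integer input"

    -- behind the gate, no further checks are needed
    internal :: Int -> Int
    internal n = n * 2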

...

With static typing, there is (of course) a dual problem in that different groups often come up with different 'types', and so attempts to integrate components will often require conversions in value structures in order to meet the input demands of modules, and conversions again upon receiving the outputs. This has a similar runtime cost. Even if a group bothered coming up with a 'Univ' type, you can bet your bottom dollar that no two groups would independently come up with the same 'Univ' type, and so one would end up converting from 'UnivA' to 'UnivB' and back. It wouldn't be convenient, and it would be rather pointless due to the loss of expressiveness. Without using 'Univ' types, the modules express (not fully, but certainly to a greater degree) what they require, thus leaving static typing more expressive than dynamic typing.
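
To sketch that conversion tax (UnivA and UnivB are hypothetical names for the two groups' independently invented universal types):

    data UnivA = AInt Int | AStr String
    data UnivB = BNum Int | BText String

    -- glue required whenever a value crosses from group A's module to B's
    aToB :: UnivA -> UnivB
    aToB (AInt n) = BNum n
    aToB (AStr s) = BText s

    bToA :: UnivB -> UnivA
    bToA (BNum n)  = AInt n
    bToA (BText s) = AStr s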

Where dynamically typed languages benefit over statically typed ones is in at least having a common, global 'Univ' type that is properly handled by all functions. The value of this cannot be overstated; it is of the same benefit as having common idioms for error handling or common approaches to security integrated across all components of a language. It is especially beneficial when you choose to start mucking around with class objects and such.

As far as the advantages in syntax (dynamically typed vs. static 'Univ'), that could be handled in a static-typing language via use of implicit conversion functions and/or an extensible grammar (syntax-only metaprogramming).

"As far as the advantages in

"As far as the advantages in syntax (dynamically typed vs. static 'Univ'), that could be handled in a static-typing language via use of implicit conversion functions and/or an extensible grammar (syntax-only metaprogramming)."

Wouldn't it be possible for a dynamically-typed language to provide a default (though optional) library or mechanism for static typing, and have automatic conversion between them? The dynamic modules could easily call the static ones without any boilerplate, as the 'Universal' type is a strict superset of all types in the static universe (and could therefore be handled by a single transparent functor), but unfortunately the caged static modules would have to remain ignorant of their dynamic surroundings without resorting to another ugly hack.
I think that having a standardized library for static type checking in a predominantly dynamic+strongly-typed language would be just as effective as having a strictly-typed language with a transparent interface to the dynamic world.

Again, it's all about preference.

Wouldn't it be possible for

Wouldn't it be possible for a dynamically-typed language to provide a default (though optional) library or mechanism for static typing, and have automatic conversion between them?

A dynamically typed language could provide a default library or mechanism for static typing. Making it a 'standard', as you suggest, would indeed alleviate many of the problems associated with it. And what you'd end up with is quite close to 'soft typing' - a language where parts of the language are checked at runtime and parts are checked at compile time (often to achieve certain optimizations).

But as far as 'automatic conversions' go, that implies there is something to convert - which usually means converting values between types. It is unlikely a dynamically typed language would define a whole additional set of values just for static typing; rather, 'static typing' in such a language would more realistically consist of attempting to statically guarantee certain properties of its normal dynamic values based on known flows of control. E.g. if a function accepting obj requires obj.x and obj.y, the checker might verify that '.x' and '.y' are available on every value passed to the function... and that they, too, are compatible with wherever they are passed, and that the return value is compatible with how it is used.

Keep in mind, further, that supporting a standard form of static typing - even an optional form - often requires giving up a variety of other metalanguage 'features' that are commonly associated with some dynamic languages, such as the ability to muck around heavily with the workings of 'classes' at runtime. Well... you need to give up such things, or accept that the language will have inconsistent or ill-understood feature interactions.

the caged static modules would have to remain ignorant of their dynamic surroundings without resorting to another ugly hack

I'm not certain how "ugly" it would be. One certainly couldn't call dynamic surroundings and remain statically type-safe. But vice versa is true too: dynamically typed modules can't automatically gain static type-safety by calling statically typed modules. It would come down to gatekeeper methods; ideally, gatekeeper methods in a language with a 'standardized' static typing system could be made implicit. One could still benefit from static typing within the module that uses it (where it would help with optimizations and internal safety for what are possibly complicated pieces).

The ability to call into dynamic modules is worth maintaining, especially for framework implementations.

I think that having a standardized library for static type checking in a predominantly dynamic+strongly-typed language would be just as effective as having a strictly-typed language with a transparent interface to the dynamic world.

In many ways it would be. Just keep in mind that it couldn't be done for free. That is, a dynamic language with a standardized library for static type checking will have some of its own dynamism 'dampened' in order to make the standardization possible - especially when it comes to dynamic forms of metaprogramming, which it would need to trade for more static compatible forms.

Typed Scheme

As I understand it, Typed Scheme is the existence proof of your argument: static typing for Scheme, implemented via the macro system.

I think this mis-states what static types do

Static types handle more than function composition. What they do is show, at compile time, that a program will not incur runtime range or domain errors. For example, that in all cases where "+" is applied, it is defined over the arguments used.

But assuming correct implementations of each, the difference between the two is purely a matter of when errors are diagnosed. The only sound argument that static typing is better is an argument based on debugging cost: errors caught earlier in the lifecycle are demonstrably cheaper to repair than errors that are caught later. Or alternatively, in critical systems the need to prevent errors in the field is overriding, and testing simply is not good enough at all. The argument against static typing seems primarily to do with convenience and speed of prototyping. Speaking purely as a user, I am not compelled by the proposition that programmer convenience is more important than robustness of the program that I run. The prototyping issue, however, is very real, and this is one reason that BitC retains sound and complete type inference.

There is another argument against static typing having to do with expressiveness. It is difficult, for example, to type heterogeneous lists without appealing to type systems that do not infer automatically. In my opinion, there is a continuing investigation in this space, and the expressiveness gap between "inferrable static" and "dynamic" seems to shrink daily. At some threshold, the argument for dynamic typing will disappear.
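
To make the heterogeneous-list example concrete, here is a sketch using GHC's existential types, which lie outside what plain Hindley-Milner inference can reconstruct (the wrapper must be declared and annotated by hand; all names are hypothetical):

    {-# LANGUAGE ExistentialQuantification #-}

    -- a heterogeneous list: each element may have a different type,
    -- as long as it supports Show
    data Showable = forall a. Show a => MkShowable a

    hetero :: [Showable]
    hetero = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

    render :: [Showable] -> [String]
    render xs = [show a | MkShowable a <- xs]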

I just mentioned Rice's

I just mentioned Rice's theorem in response to naasking's comment. Dynamic typing will always be more expressive, and we have a few decades left until the rigidity of static typing finally compensates for its considerable loss of productivity. Even the most academically-inclined languages currently in existence (and I've tried Epigram, Qi and Charity among others) won't get you that far, and I'm very cynical towards the claim that they will any time soon.

Code completion

Modern IDEs for statically typed languages can use type information to provide a list of applicable methods at edit time. In my experience this is a very good way to learn a new library. Can you do this for dynamically typed languages?

Perhaps a coherent, if

Perhaps a coherent, if informal, metric for expressiveness is called for, since I'm not sure I buy the argument that dynamically typed languages are "more expressive".

Expressive vs. Computable

I think you confuse the two. A static language that can express that a property is required is more expressive than a dynamic language that cannot express the requirement, regardless of whether that property is decidable in general.

And most properties that are undecidable in the 'general' case prove to be readily decidable in most specific cases - which especially include those specific cases to which a programmer can effectively program. Rice's theorem doesn't have the practical impact you think it might.

rigidity of static typing finally compensates for its considerable loss of productivity

I'm aware that static typing is associated with many languages that have a productivity cost in boilerplate code and such, but I'm fairly certain that most of this cannot be blamed on the static typing itself. More often the difference comes down to the different target taskings of the different languages, and the different libraries available. If you start messing around with high-level libraries and macros, you start becoming productive in any language. I've read somewhere that people write about the same number of lines of code per hour no matter which language they use... at least once they are skilled with it.

Static types of partial functions

Static types handle more than function composition. What they do is show, at compile time, that a program will not incur runtime range or domain errors. For example, that in all cases where "+" is applied, it is defined over the arguments used.

What about division (by zero)? I have actually "programmed" in a language which prohibited division by zero at the type level. It was a complete pain because of what boils down to "trust me, this can't ever be zero" annotations, which I had to put into many places. (Violating one of the annotations would result in run-time assertion errors, not crashes, for the record.)
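
For comparison, one common static encoding of that obligation (a sketch; NonZero, nonZero and safeDiv are hypothetical names, not the language being described) makes the 'trust me' step explicit and checks it exactly once:

    newtype NonZero = NonZero Int

    -- the single place where 'this can't ever be zero' is actually checked
    nonZero :: Int -> Maybe NonZero
    nonZero 0 = Nothing
    nonZero n = Just (NonZero n)

    -- division itself is total: a NonZero witness is required
    safeDiv :: Int -> NonZero -> Int
    safeDiv a (NonZero b) = a `div` b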

This is a case where the axioms lie

The problem here wasn't a failure of static typing. The problem was that divide was underspecified by the static type system. A correct specification of divide would be one of:

    divide: int int -> int | (raise DivideByZero)
    or
    divide: forall x:int
      x == 0 => divide int x -> raise DivideByZero
      else   => divide int x -> int
    

In the first example, I'm trying to appeal to union types. In the second I'm trying to appeal to dependent types. The problem is that classical static type systems don't support dependent types. In simpler type systems, the "excuse" is that the output type doesn't matter when the function doesn't return, and raising an exception (in that view) constitutes a non-return. I'm not defending, merely explaining. Regrettably, we adopt a similar lie in BitC, because exceptions are not required to be declared, though from a core-semantics standpoint we do treat all returns as union types (intended | exceptional) and propagate that. This doesn't eliminate the annotation problem.

How is raising a runtime assertion error different from crashing the program? In either case execution halts, right?

OT: runtime assertion vs crashing

How is raising a runtime assertion error different from crashing the program? In either case execution halts, right?

A runtime assertion delivers a more or less helpful error message. By crashing I meant random undefined behavior (think segfault).

A couple of further minor points

In response to naasking: statically typed languages are clearly less expressive: they reject observably bad programs.

In response to David: your point that metaprogramming tends to be limited to one compilation unit is well taken, but this can be read as a challenge...

I wouldn't expect the

I wouldn't expect the expressiveness of bad programs to be a metric of interest.

I laughed out loud when I

I laughed out loud when I read that comment, haha.

To extremify the point, suppose that we remove the need for syntactic checks from our language too, so that every sequence of characters would be a valid program. Is this then the most expressive language possible? No.

I agree with naasking. Clearly "how many programs" the language processor accepts is not exactly an important measure.

Go for it!

In response to David: your point that metaprogramming tends to be limited to one compilation unit is well taken, but this can be read as a challenge...

Go for it then. I've already covered that ground a great deal myself in my own language design, but I'd be happy to steal someone else's solution if it works. ^_^

statically typed languages are clearly less expressive: they reject observably bad programs.

Lol. Both static and dynamic typing languages allow you to express bad programs; it's just that in dynamically typed languages you cannot usually express (at least to the interpreter) which properties the program is intended to possess. It is the ability to express what counts as a 'bad program', and to have an interpreter or compiler understand and enforce that description, that makes statically typed languages more expressive.

But as noted in the topic statement, this expressiveness can be bought back in a dynamic language (at least in part) by use of compile-time metaprogramming.

Of course this isn't a binary deal. There are further degrees of expressiveness among type systems: one type system is strictly more expressive than another if it allows the programmer to express an identical set of properties plus at least one more. The ability to express with greater precision which programs are 'bad programs' results from greater expressiveness. A language that attempts to hang onto its vestigial dynamic features may fail to achieve its potential expressiveness, even given use of compile-time metaprogramming.

Infrastructure issues

Take a look at:

Ryan Culpepper, Sam Tobin-Hochstadt, Matthew Flatt (2007). Advanced Macrology and the Implementation of Typed Scheme. Scheme Workshop.

Certainly the static checks can be performed using macros in a dynamic language, but doing so requires coding up substantial infrastructure. I don't know of any dynamic language that makes this kind of thing easy, and there are well-known problems with trying to mix code following a type discipline with code that does not.

There's certainly room for interesting work here. I'd say that the PLT scheme folk are the people to watch right now.

Postscript: I had missed Noel's earlier comment when I wrote the above.

Please read this if you want to contribute to this discussion

I think that all the responses that I've read so far have missed the point entirely, and perhaps I'm partly to blame for that.

I claim (with almost absolute certainty) that a dynamically typed language without any reflective or metaprogramming capabilities is inferior to a statically typed language with those same limitations. With static typing, one can introduce a type union which can essentially bypass (or make no use of) the typechecker. To me, this isn't just an opinionated claim; it qualifies as an informal proof. Q.E.D.

The purpose of this thread was to say, with absolute clarity, that in a language with static typing, all type checking and analysis is performed at compile-time, and is set (and essentially limited) by the language designer. Any programming language with high-level metaprogramming features will give you the ability to bolt on such a feature at will AND MORE. Q.E.D.

This topic was never about the actual merits of dynamic versus static typing or metaprogramming. I have never seen the point which I have made above stated in any bulletin board which I have come across and read. Unless you can (with absolute clarity) refute that statement, any contribution you make will be entirely off the radar. But if you do find fault with any of the claims I've made, then please inform me to the contrary.

The mistake that's been made (by others, and by myself included) was in arguing about opinions on this issue. Having a language with variable semantics and extensible syntax has its problems; fine. Dynamic or weak typing guarantees almost no type safety at all, and when managing a large project it makes you mad; okay. Having independent, unofficial type systems circulating makes portability almost impossible. But these are irrelevant opinions and beside the point. This thread is about a logical claim that can either be absolutely true or entirely false, with no 'maybe's or in-betweens.
I don't care about your personal experiences or the inconveniences you've stumbled across when working in a language which doesn't use a static type system. To me, this is like debating an established religion, or trying to convince a complete stranger why you prefer the colour blue, or why your favourite football team is superior to the one he / she supports. So many Internet forums are full of this nonsense that I'm not willing to argue the issue any more.
Keep your contributions relevant.

Question

Sincere question: what's the benefit of a dynamically-typed language, with or without metaprogramming facilities, over a statically-typed language with a Turing-complete type system (e.g. Cayenne or Qi)?

Insufficient Information

Knowing that the static type system is Turing complete is insufficient information to decide the relative advantages. Manifest typing has some severe disadvantages when it comes to modularity, for example, but not all static type systems are manifestly typed. And the types available, and how they may be applied, also have significant impact.

Who is missing the point?

My first reply was directly relevant. You claim: "Any programming language with high-level metaprogramming features will give you the ability to bolt on such a feature at will AND MORE. Q.E.D." In practice, this is not true. Why? Because actually doing so is incompatible with modularity, symmetry, and other desirable language features.

To be compatible with modularity you might attempt to 'standardize' a single type system as part of the dynamic language. This would have many of the same impacts on the language as would having made it a statically typed language to begin with... except (due to the additional feature of making the typecheck 'optional') lacking greater support for typeful programming.

Your claims are also internally inconsistent, in such a manner that at least one of them must be incorrect:

  1. With static typing, one can introduce a type union which can essentially bypass (or make no use of) the typechecker.
  2. in a language with static typing, all type checking and analysis is performed at compile-time, and is set (and essentially limited) by the language designer.
  3. any programming language with high-level [compile time] metaprogramming features will give you the ability to bolt on such a feature at will

So which is it: is static typing 'limited' by the language designer, or can it be bypassed by use of a type union, allowing high-level metaprogramming features to bolt on alternatives at will?

I have never seen the point which I have made above stated in any bulletin board which I have come across and read.

I have seen it many times, though worded differently each time. The point that one may implement alternative static type systems, languages within languages, and other such features rises occasionally, but with some regularity, in discussions about the merits of metaprogramming (especially compile-time macros, programmer-controlled partial evaluation, extensible syntax or semantics, etc.).

It is especially common to point such features out when young and budding language designers go through the common "let's build a language entirely out of macros" phase. With a Turing-complete macro system, such a thing is possible. But digging deeper usually reveals why doing so isn't such a magnificent idea - and it basically comes down to the fact that deep, code-walking forms of user-defined metaprogramming interact in ways that are difficult to predict and can cause errors that are not obvious to the programmers.

There are more examples than just adding a type system atop a language. Consider the dual efforts:

  • provide STM in the language via use of an 'atomic' keyword around a block of variable mutation actions
  • automatically transform certain imperative expressions into reactive ones that will perform a callback upon update

In addition to the local impacts, each of these goals will affect how the variables involved in the expression or statement are stored and processed upon update (which, for simplicity, might be applied to all variables). If the same variable is involved in both STM and a reactive expression, it will be subject to both impacts. This will require some non-trivial resolution, such as dealing with reactive expressions both inside and outside an 'atomic' block and how they respectively receive callbacks, and making certain that a reactive expression outside the atomic block doesn't receive its callback until after the commit, as sketched below.
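
One possible shape for that resolution, sketched with Haskell's standard Control.Concurrent.STM API (notify is a hypothetical listener lookup): callbacks are collected inside the transaction and fired only after the commit succeeds:

    import Control.Concurrent.STM

    -- a hypothetical listener lookup for the updated variable
    notify :: TVar Int -> Int -> [IO ()]
    notify _ new = [putStrLn ("updated to " ++ show new)]

    -- callbacks are collected inside the transaction, not run there
    updateReactive :: TVar Int -> Int -> STM [IO ()]
    updateReactive var new = do
      writeTVar var new
      pure (notify var new)

    main :: IO ()
    main = do
      v <- newTVarIO 0
      callbacks <- atomically (updateReactive v 42)
      sequence_ callbacks   -- fired only after the commit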

Worse, the 'dual efforts' can also be duplicative. Two different modules/frameworks/libraries can each use a different STM library, and these STM libraries may be unaware of one another even when acting on the same variable. The same goes for two different garbage collectors or smart-pointer systems, two different unit-testing frameworks, two different reactive-programming solutions, or two different type systems. Since these are probably competing solutions, it is very unlikely they'll ever be made aware of one another.

It is the job of a language designer to resolve such problems, determine policies for feature interaction, and pick singular solutions as the 'standard' so that cross-cutting problems are solved once and only once in order to provide greater language symmetry, inter-module consistency, and to avoid or reduce artificial conflicts between libraries and modules.

It doesn't matter whether this 'standard' is made part of the language versus part of a 'standard library' of features enabled by language-primitive metaprogramming. The distinction between a language and its standard library is a frivolous one: a language following a different standard library is effectively a different (and often incompatible) language in practice, and this only becomes truer the more 'cross-cutting concerns' are embedded into the standard library.

Your note about the potential for compile-time metaprogramming as a vector for a user to introduce neat little type systems and such is well taken, but must be considered from a wider perspective of how the language will actually be used, and how such a 'type system' will interact with other metaprogramming-based features. Your note that one can standardize this to help resolve conflicts is also well taken, but needs to be considered relative to how such a standardization, when applied to any large degree, effectively interferes or interacts with user-defined type-systems.

With static typing, one can

Contradictory? How?

  1. With static typing, one can introduce a type union which can essentially bypass (or make no use of) the typechecker.
  2. in a language with static typing, all type checking and analysis is performed at compile-time, and is set (and essentially limited) by the language designer.
  3. any programming language with high-level [compile time] metaprogramming features will give you the ability to bolt on such a feature at will
Static typing | Metaprogramming | Result in terms of expressiveness
Without       | Without         | All type checking at run time (if strongly typed).
With          | Without         | Dynamic type can be declared. Type checking at compile and run time. Type checker limited to the established standard.
Without       | With            | Dynamic typing by default. Type checking at run time, and possibly at compile time too, plus extensibility and the possibility to evolve along with research and demand.
With          | With            | Dynamic type can be declared. Type checking at run time and at compile time, plus extensibility and the possibility to evolve along with research and demand.

Listed from least expressive (top) to most flexible (the bottom two rows).

Notice that the type system can evolve with research and demand too (on the bottom two rows). Static typing on its own is pretty... static, as it can rarely accommodate any major change.

Other than that, I'm going to have a long and hard think about all the other points you've raised. :)

Contradictory

where you said "and essentially limited".

Static typing on its own is pretty... static, as it can rarely accommodate any major change.

The same can be said of dynamic typing on its own.

Metaprogramming is something I fully promote, despite knowing how easily it can hand you enough rope to hang yourself with. In practice, it needs to be heavily tempered by standardization to avoid duplicative efforts, but it still needs to provide enough flexibility to explore new ideas, so one can reasonably decide how things should be standardized after trying them out.

Funny, I had a rejected

Funny, I had a rejected OOPSLA submission trying to think about functional reactivity in an imperative setting, which, as I went deeper into it, hit a lot of ideas from the STM world :)

Enforcing

Consider the following well-known definition, due to Reynolds, according to which type structure is

    - a syntactic discipline for enforcing levels of abstraction.

Compositionality issues aside, I don't see how meta programming can enforce anything, unless the meta language can prevent itself from being used so that no client code can change/disable/sidestep/undermine your custom "type system". As others have pointed out already, this is central to what a type system is all about.

And yes, approaches along these lines have been discussed on LtU before. I don't have a pointer ready, though.

[NB. I would advise you to reconsider the slightly offensive tone of your last post above.]

Again, you're outlining the

[Serious editing]

The topic at hand is the expressiveness of dynamic / static type systems under certain conditions. If one were to construct a Venn diagram, the Python / Perl oval would be completely contained within the Haskell / Java one. The same diagram would have BOTH Common Lisp and MetaML as one larger bubble above that (but one and the same anyway).

Regardless, there's no doubt that a meta language can undermine the established type system (if one is present). Then again (not to counter your point), since the field of type theory is still developing, I think having the ability to extend it at will would be interesting and preferable. Paul Graham refers to a language with such power as a 'hundred-year language', one which can continuously evolve to meet the demands of its users. Free-market theory (the supply and demand of features) allows Lisp users to scratch their heads whenever they hear about 'object-oriented programming', 'domain-specific languages' or whatever fad or buzzword becomes the next craze. A gradual evolution of ideas, paradigms and approaches to problem-solving is, I believe, what's needed.

On the other side of the spectrum, of course, are no portability, no established standard, and the ability to undermine the current standard and norm (which is what the majority of you who have responded are concerned about): everyone has their own library, reinvents the same feature in different ways, and the community is split.

I think a compromise has to be reached. Then we could have a language which could survive not just for a few years or decades, but perhaps for centuries.

Anyways, this is all just off-topic noodling. It's an interesting discussion / challenge.

The burden of a proof

Again, typing is all about enforcement and guarantees. You made the claim that some meta-programming approach somehow subsumes typing. As long as you cannot prove that the "type system" cannot be undermined in such an approach, you have not proved your claim. So far, you haven't provided such a proof, or even a sketch; you haven't even acknowledged its relevance.

I'm going to eat my words

I'm going to eat my words and point to MetaML, Qi and Template Haskell.

If someone could point me to a recent thread or paper on this issue, it would greatly aid this discussion.

Disagree

Type systems don't need to prevent undermining just to qualify as type systems. There is a reason we have words like "weak typing". The ability to declare violable assumptions (such as casts) isn't unusual. Now, if Ran's claim uses the words "strong, static typing" somewhere, then the "no-undermining" clause applies.

Same with using metaprogramming for security proofs.

He didn't restrict his claim

He didn't restrict his claim to weak type systems, so I assume it is meant to include sound "strong" ones.

Relevance

I'd conjecture that (coming from an ML background) you'd think the relevance is that it is difficult (if not impossible) to prove the soundness of types implemented via meta-programming facilities. And as long as one can't show that the type system is incapable of being subverted (as has been done for Standard ML), one has failed to subsume the type system of ML.

(QED and all that).

I don't see how meta programming can enforce anything, unless the meta language can prevent itself from being used so that no client code can change/disable/sidestep/undermine your custom "type system".

Of course, if you kick away the dynamic ladder that got you to that point, the end result is that you are programming in a static language anyway.

Incompleteness

Of course, if you kick away the dynamic ladder that got you to that point, the end result is that you are programming in a static language anyway.

Exactly. And to avoid that, the defined type system would need to be able to type the dynamic meta programming facility itself, and thus ultimately be able to prove that its own definition is type safe. Of course, we know that no interesting logic can prove its own consistency, so this will never work.

LtU ethos

As near as I can tell, the LtU collective does not have a borg-like opinion on the static vs. dynamic debate.

From a historical standpoint, Frank Atanassow (who spends way too much time on graduate studies these days, and not enough on LtU) was the main protagonist for the "dynamic languages are unityped" argument. If so, then his retraction of the argument might be noteworthy.

Frank's retraction was quite

Frank's retraction was quite limited in scope. He was retracting the following statement:

Untyped languages are a proper subset of typed languages.

What he really meant to say was that 'unityped' languages are a proper subset of typed languages. A unityped language can express every datum as belonging to a single type, which one might call 'Univ'. This type may be a tagged union.

Kevin's argument, to which Frank's was the second response, is quite similar to Ran's topic argument - about expressing static programs in a dynamic language via use of a macro system. Ran would do well to reference it as proof that this sort of thing does come up in these forums and elsewhere. ^_^

As near as I can tell, the LtU collective does not have a borg-like opinion on the static vs. dynamic debate.

Very true. The static vs. dynamic debate is fodder for holy wars even here.

:-)

David Barbour: The static vs. dynamic debate is fodder for holy wars especially here.

Fixed that for you. :-)

We can be better.

We can be better.

Of course.

All joking—which is primarily aimed at myself anyway—aside, I've always found the discussions here, even when heated, considerably more informative and constructive than anywhere else. But I've been semi-deliberately trying to do more and talk less, which is one reason I haven't weighed in on this particular thread other than to make a joke. :-)

Weighing in

The problem I see with the argument is that what macros effectively do is allow one to build another language on top of an existing one. The language that is built is no longer the same (dynamic) language it was built on top of. I don't think this resolves the dynamic/static question so much as it points out the advantages of macros that can be used to create new languages on top of existing ones. In creating those new languages, one has the freedom (and the difficulty) of altering the tradeoffs inherent in designing new languages.

So, I don't think the rumination from a theoretical standpoint is as interesting as looking at those efforts that try to realize this vision (e.g., Typed Scheme). How smoothly that language plays with the underlying language, how well it balances dynamic run-time behavior with static type checking (staged or not) while retaining the benefits of both, and how practical it is to express the abstractions involved in our common (or niche) applications, is where most of us programmers would judge the merits of the PL.

I'd judge the argument as showing the innate advantage of meta-programming (macros). And the practical upshot would be that macros are typically easier to implement on top of dynamic languages because the syntax approaches an abstract syntax tree (Scheme/Lisp in particular). Static languages could theoretically do the same, but the syntax of the language rarely is as close to the underlying AST. How best to expose an AST from a static PL is still an open question (Guy Steele is involved in efforts to do this in Fortress).

The practical upshot

And the practical upshot would be that macros are typically easier to implement on top of dynamic languages because the syntax approaches an abstract syntax tree (Scheme/Lisp in particular). Static languages could theoretically do the same, but the syntax of the language rarely is as close to the underlying AST

There isn't sufficient evidence that this is a consequence of the language being dynamically typed. Asserting that "the syntax approaches an abstract syntax tree" in dynamic languages in general, based on its doing so in Scheme/Lisp, is quite a premature generalization.

In any case, the typechecking activity usually occurs after macro expansion and the completion of parsing. And metaprogramming facilities such as camlp4 or those found in languages like D from Digital Mars should provide some counterexamples.

A combination of syntax modification, compile-time resolution (such as programmer-controlled partial evaluation, potentially even access to databases and such), and first-class types (type-descriptors that are subject to evaluation or resolution) is sufficient to achieve some rather arbitrary metaprogramming in a statically typed language.

The main advantage dynamic languages have is the potential for decomposable or fully composable blocks or functions; static languages must ensure that the types are compatible with the surrounding application, which is difficult to do with 'eval'-style statements hanging around (at least without an 'any' type). However, this degree of composition isn't required in practice if one stages the syntax manipulation (such as macros) separately from the other compile-time resolution mechanisms.

The difference between theory and practice...

...as the old adage says, is that in theory there is no difference. I'd say that, in theory, macros are independent of the whole question of static versus dynamic. Indeed, the introduction of meta-programming into a language means that you can theoretically make the language blend into any point on the static/dynamic range that you so desire (with the background proviso that writing PLs is a rather difficult task). Meta-programming effectively means that you are programming the programming language.

So, yes, I'd agree that this is not really a static-dynamic issue. This is more a question about meta-programming. A PL that has meta-programming facilities is, strictly speaking, capable of being more expressive than one without macros. But framing the postulate in that manner doesn't quite evoke the fight-or-flight instincts that are associated with the static-dynamic debate. And many programmers would weigh in that they don't want the flexibility of macros polluting the syntax/semantics of the languages that they have come to know.

Perhaps we don't, but it

Perhaps we don't, but it does sometimes seem that each side of the collective believes that the other should be assimilated...

All joking aside, I think that the discussion here is both mistaken and uninteresting. Mistaken in the sense that metaprogramming does not intrinsically convey the ability to subvert the static type system in a statically typed language, and also in the sense that many static type systems do not admit Univ or Dynamic, and those that do are (arguably) not fully static type systems. I don't intend a value judgment by saying this; I merely observe that the discussion seems to be beating the wrong dead horse to death.

The portion that is uninteresting concerns relative merit. The difference between the static and dynamic systems has to do with how late certain errors are identified, and whether a certain class of errors (runtime range violations, modulo things like the divide issue) can be said to be completely identified at any point. My personal opinion is that early diagnosis is better for most of my purposes, that complete diagnosis is imperative for my purposes, and that for these reasons I am prepared to sacrifice some hypothetical expressiveness (though in practice I rarely notice any loss). Other motivated choices may be sound; I take no position. The only positions in this debate that are truly stupid are the absolutist positions.

Being a closet pragmatist (okay, an out of the closet pragmatist), I want to expand on something I said earlier. From a pragmatic perspective, we are only concerned with the expressiveness gap between static and dynamic languages to the extent that this gap actually intrudes on some program that we actually wish to write. As things like dependent type systems are coming to close the type/value gap, the expressiveness gap between static and dynamic type systems as applied pragmatically to real programs is shrinking. To the extent that this is true, it may come to pass that this difference ceases to matter.

Mistaken?

Mistaken in the sense that metaprogramming does not intrinsically convey the ability to subvert the static type system in a statically typed language

I think you have misread that part of the discussion. The point in question was whether a (hypothetical) user-defined type system in a dynamically checked language, that has been defined using meta programming, can be subverted by the same meta programming capabilities.

Agreed with everything else you say.

Back to the original question

In one view, the type checker is merely a phase of the compiler. If so, then in the abstract there is no reason not to implement that phase in a preprocessor or a suitable metaprogramming scheme. In practice, this doesn't work out very well, for two reasons:

  • It is hard to get size-based polymorphism implemented efficiently in this way.
  • Internal re-use of the resolver and type inference engine within the compiler turns out to be extremely useful as a guard against compile-time errors in practice.

Of these, the polymorphism issue seems like the more compelling problem. We are going to run squarely into separate compilation.

In languages having size-based polymorphism and/or type classes, you either have to eat a lot of indirection overheads or you need to do at least a limited form of [dynamic] link-time instantiation. The alternative (classical) implementations rely on dictionaries, and when size-based polymorphism is introduced the overheads of that implementation technique start to become pragmatically intrusive. While there is no theoretical loss of expressiveness, there is a practical loss of expressiveness.
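
For reference, here is a sketch of the classical dictionary-passing implementation mentioned above (EqDict, elemBy and intEq are hypothetical names): the methods travel as an explicit record, costing an indirection at every use:

    -- roughly what 'class Eq a' amounts to under dictionary passing
    data EqDict a = EqDict { eq :: a -> a -> Bool }

    -- a polymorphic function receives the dictionary as an extra argument
    elemBy :: EqDict a -> a -> [a] -> Bool
    elemBy d x = any (eq d x)

    intEq :: EqDict Int
    intEq = EqDict (==)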

The gap here is that the information needed for instantiation to occur needs to be carried through to the runtime system, and that information is very closely tied to the static type system. If the metaprogramming system can build on a sufficiently powerful dynamic reflection system and dynamic code synthesis engine, this issue might be overcome, but in the end I wonder if the net accomplishment isn't merely to implement a static type system in a particularly unmaintainable way using about 20x as much code (that JIT system has to be counted!).

When we came to realize that BitC could not achieve fully static separate compilation, we were very concerned that we might need a JIT system. The problem of certifying a JIT layer was a huge undertaking that we definitely didn't want. For kernel stuff, static link time is fine, but for general applications we clearly require dynamic link time specialization. Thankfully, it looks like a very modest code specializer may be sufficient in practice. Perhaps even one that does not preclude formal verification.

But now that I think about it, the whole issue I am expressing here may be moot. I have never seen a language with a dynamic type system that could deal with prescriptive representation issues at all. In this respect, at least, the static (or hybrid) languages really do seem to be more expressive.

tightening the "proof"

If I have a dynamic/reflective environment and you show me a static/reflective language, I can in principle always produce a type-sound program which runs in my environment but which your static type checker can't handle.

Conversely, you can show me any static/reflective language and in my dynamic/reflective environment I can always implement your language and its type checker as modules and thus run any program you can produce in your language in essentially the same way.

So d/r can always "one up" s/r and s/r can never "one up" d/r. Dynamic reflective languages are distinctly the more powerful category.

-t

the point?

You show me your compiled environment, and I show you my assembly programming and macro language. For any snippet you write, I can run it in my assembly language (since you can always compile your code). However, you can't get all the functionality that I can write, since I have complete access to all instructions, registers and instruction-scheduling tricks - whereas you always defer to your compiler. Hence, my assembly environment is distinctly the more powerful category.

- PKE.

I got this bag of

I got this bag of transistors here, maybe I can run your programs for you? :-)

Taking the argument

Taking the argument further....

Any snippets in either your static or dynamic languages get executed on some processor, and I can encode your snippets' operation as an instruction on my Verilog processor.

So hardware description languages win over both, and beat out your assembly for that matter?!?

Chicken meet Egg

The problem with hardware description languages is that they are just too dang low level. We need to get back to a Turing complete programming language. A Brainfuck CPU designed in VHDL seems like the most viable option.

HDLs are not Turing complete?

HDLs are not Turing complete?

Which HDLs are not Turing complete?

Expanding on the previous thought a bit, which HDL is not Turing complete?

SystemC is a superset of C++ and the behavioral subset of VHDL is pretty much Ada. SystemVerilog approaches VHDL in abstraction and capabilities. Along with Verilog, these are the dominant HDLs in the marketplace today. Which of them is not Turing complete?

Let's also be careful not to imply that all HDLs are created equal. There is an enormous difference between Verilog and, say, Lava or Hawk.

I hope you get my point.

I hope you get my point. There is a reason that we don't all write in assembly (or in Verilog), in the same way that there is a reason we want static type-checking rather than dynamic type-checking. Yes, I do concede that there might be corner cases where dynamic type-checking can express code (even well-behaved code) that you simply can't express using static type-checking. That is probably what you mean when you use the words "more powerful".

The point of static type-checking is to make sure (at compile time) that programs don't go "wrong", which is - if you think about it - a tremendously neat idea. That means that you spend an awful lot less time on tracing common bugs (typically more than 50% of an engineer's time is spent on testing and bug tracing). Instead you focus on "interesting" bugs, i.e., bugs that occur due to algorithmic problems or specification problems.

The price for this is that some well-behaving code won't type-check, and you'll have to find workarounds. This used to be quite difficult, but with newer language and typing features, these cases seem to me to be less and less frequent.

Finally, having a type-checker and a type-inference engine in the compiler means that you can "generate" code depending on types and their relations. It can be seen as having an advanced macro language based on Prolog at your fingertips, and this in itself makes it possible to describe invariants in your problem domain that are enforced statically (at compile time). This is something that is rather hard to do in dynamically typed languages, and it is one reason why some static-typing proponents feel that statically typed languages are more expressive than dynamically typed ones.
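To make the Prolog analogy concrete, here is a minimal Haskell sketch (mine, illustrative only): instance declarations act as Horn clauses, and the type checker performs the resolution at compile time.

    {-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses,
                 FunctionalDependencies, FlexibleInstances,
                 UndecidableInstances #-}

    -- type-level Peano numerals
    data Z
    data S n

    -- 'Add x y z' reads like the Prolog clause add(X, Y, Z);
    -- the two instances are its clauses
    class Add x y z | x y -> z
    instance Add Z y y
    instance Add x y z => Add (S x) y (S z)

    -- the "query" is solved entirely during type checking;
    -- no term-level computation happens here
    add :: Add x y z => x -> y -> z
    add = undefined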

- PKE.

Other People's Programs

Being in the static typing camp, I agree. And I would like to point out that being in the static typing camp makes much more sense when you are a consumer of either library code or large applications, and especially when there are potential security concerns. I tend to place more trust in programs written in statically typed languages, because if a program passes the typechecker, then it is free from a number of potentially dangerous runtime behaviors on my machine. I simply don't trust other people to test their software well. When you give me your code, I want to run a typechecker on it before I ever attempt to execute it.

I have often found that the crux of the argument between static typing and dynamic typing lies squarely in how much and how routinely one uses other people's code (meaning both running it and developing against it). It is also closely related to the scale of the software program written. Large programs tend to benefit more from static types than small ones.

When you are writing your own programs, from scratch or nearly from scratch, you often get the sense that the static type system is restrictive and "gets in the way". This is because it seems always to prevent you from expressing your ideas as they spurt forth malformed from your brain.

But when you often have to use the results of other people's work, people who either unintentionally or intentionally make mistakes, it makes a big difference when there are machine-checkable safety and well-formedness properties that are nontrivial. This guides the integration of software and increases its robustness (and performance). If you were a consumer of database data, would you prefer a database that had input sanitization and automated quality tests, or one that didn't? If the former, then why would you want to run programs written in dynamically typed languages that carry no static guarantees?

I, for one, love when computers help me make software better. And I love it when they help defend against malicious and broken software, too. I am not a fan of consuming code that was produced in a hurry. I actually like the idea that static types make you stop to think about how to make the code correct! After all, in the typical life of any given program, only a vanishingly small amount of development time is actually spent writing the code. More time should be spent in design, but inevitably most of the rest of the time is spent debugging and maintaining it. And time spent by users actually trying to use the software dwarfs developer time. If your mistake costs 1000 users 5 minutes, then it is far worse than costing yourself 50 or even 500 minutes making your program typecheck.

I don't want to overgeneralize and caricature here, but I often get the impression that dynamic language proponents do little design up front. Maybe I'm wrong, but if you are spending the time to come up with a good software design, isn't it likely that the code that design inevitably gives rise to will be very easy to make statically type-safe?

I would prefer to use a

I would prefer to use a database in an untyped way (dynamically typed pickling, like JSON). Once my program design has settled, I'd like the database to tell me what the common structure of recent data is and allow me to refine my (or others') programs. For example, I'd like to provide type-filtered views on a web service, which would support evolving the schema over time without breaking legacy consuming software. As it's my own database, I'd like to be able to bend the rules in prototype phases, even years after deployment. As another example, what if I wanted to add data to an existing shared database, but have extra stuff in it?
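A minimal sketch of the kind of type-filtered view I mean (hypothetical names; Haskell used only as notation): rows are untyped records, and the view keeps exactly those rows matching the shape today's consumers expect, so legacy writers and writers with extra fields keep working.

    data Value = VInt Int | VStr String | VRec [(String, Value)]

    -- the shape the current schema expects of a "user" row
    conformsToUser :: Value -> Bool
    conformsToUser (VRec fields) = all (`elem` map fst fields) ["name", "email"]
    conformsToUser _             = False

    -- extra fields still pass; malformed or legacy rows are filtered
    -- out of the view instead of breaking consuming software
    userView :: [Value] -> [Value]
    userView = filter conformsToUser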

Essentially, I agree with types as guidelines, but not as mandates. I disagree with lots of other parts of your argument, but am done procrastinating for now, and this has devolved into another false dichotomy within the static vs dynamic typing debate :)

Security

I would like to point out that being in the static typing camp makes much more sense when you are a consumer of either library code or large applications, and especially when there are potential security concerns.

That seems a bit overstated. Outside of research contexts, you don't find too many languages that use the type system to establish many security guarantees. Most mainstream security work has been in the runtime through sandboxes, access control lists, etc. Pick your favorite mainstream language: its libraries are not statically proven to be free of bad behavior like file system corruption, network spying, etc.

It seems like right now the level of mainstream involvement in security via the type system is the use of types as witnesses that certain dynamic security checks have been performed on a payload so that it can be treated as "clean" without further checking. For instance, you might wrap a String in a SafeString after dynamically establishing that it was free of injection attacks.
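A minimal sketch of that witness pattern (the names and the check are illustrative, not any particular library's API): the constructor is kept private, so the only way to obtain a SafeString is to pass the dynamic check once.

    newtype SafeString = SafeString String  -- do not export the constructor

    mkSafeString :: String -> Maybe SafeString
    mkSafeString s
      | any (`elem` "<>'\";") s = Nothing   -- naive stand-in for a real check
      | otherwise               = Just (SafeString s)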

Also, note that the most prominent capabilities based language, E, is dynamically typed. Certainly that's a language that takes security seriously.

Perhaps my original

Outside of research contexts, you don't find too many languages that use the type system to establish many security guarantees. Most mainstream security work has been in the runtime through sandboxes, access control lists, etc. Pick your favorite mainstream language: its libraries are not statically proven to be free of bad behavior like file system corruption, network spying, etc.

Perhaps mainstream security work is focused on dynamic checks because the intersection between static type system experts and security researchers is very small. Or it could also be that the bar to entry is much lower for dynamic checking (i.e. does not require compiler work). People tend to be very conservative about changing mainstream programming languages in general, even more so when it means adding complexity to the type system.

It seems like right now the level of mainstream involvement in security via the type system is the use of types as witnesses that certain dynamic security checks have been performed on a payload so that it can be treated as "clean" without further checking. For instance, you might wrap a String in a SafeString after dynamically establishing that it was free of injection attacks.

Sure, static types are useful in separating, for example, verified data from unverified data by making them unrelatable types, with only the "verify" function between them. But certainly there are opportunities for adding other dimensions to the types by adding an effect system or even a dependent type system.

Also, note that the most prominent capabilities based language, E, is dynamically typed. Certainly that's a language that takes security seriously.

I don't know much about E specifically, but in general implementing experimental languages is much easier when the language is dynamically typed, especially when performance is not much of an issue. I get the impression that implementing a sound static type system looks like grunt work to people who are primarily not language implementors. Is there something in particular about E that would preclude it being statically typed?

Static typing vs bugs

Static typing currently doesn't catch non-trivial bugs.

Trivial Bugs

Static typing currently doesn't catch non-trivial bugs.

If static typing can catch it, then that bug becomes 'trivial'?

Static typing can catch

Static typing can catch arbitrarily complicated bugs, with the extreme case leaving only specification bugs, which are inherently uncatchable by any formal method. Actually, it can catch some of those too if, for example, the specification is self-contradictory. Paul Snively can come in with a list of systems that catch bugs most people don't consider trivial, such as deadlock.

The more interesting the bugs are that you want to catch, usually, the more you need to specify. Most extant statically typed, general-purpose programming languages tend toward not requiring much specification, though many do allow significantly more information to be provided, and thus more interesting bugs to be catchable.

A type system is interesting

A type system is interesting if it rejects enough programs that contain errors but allows enough error-free programs. So two important questions are:

Which deadlock-free programs don't typecheck? (if any)
Which programs that contain deadlocks do typecheck? (if any)

Your Point

Your point appears to me to be that assembly language gives you "more" than a typical higher-level compiled language. You claim the logic I'm using to define "more" implies that.

Yes, I agree. That's right.

Please read ran's comment, above, with the subject "Please read this if you want to contribute to this discussion."

I don't mean that to censor you or to accuse you of not contributing, but simply to point out the sense of "more" that was raised here. There seems to be a lot of confusion about that, which is understandable, since we're trying to point to a definition of "more" informally. It's a useful definition of "more" in some contexts, but not the only possible one.

As for the guy with the bag of transistors: he should let me know his price and turn-around time, if we're talking business. If we're talking business strategy however, I'm interested in both his in-principle range of expressiveness and his cost structures as they relate to closing in on that limit.

-t

Back to the discussion

I'm afraid we steered away from the issue at hand, and I am sorry for that. So, let's get back to the discussion.

The first assumption you make is that it is possible to implement a static type-checker's job as a macro-level program. How is that possible if the core language is dynamically typed? In order to reason about types you would need to know something about your values, no? Perhaps you meant that the macro-language was dynamically typed?

Obviously, you can implement a compiler/type-checker in any way you care to, perhaps even through your macro-language - but I fail to see how you can get type information out of thin air. In a simple function such as "def f(x) = x + 1", how do you deduce the type of x if it can be any value? And given g : int -> int, how do you enforce safe calls to it if you know nothing about the values (except at run-time)?

PKE.

See Typed Scheme

All of these are perfectly possible. See Typed Scheme for a demonstration. In particular, the paper "Advanced Macrology and the Implementation of Typed Scheme" has many details on this topic.

I'm not so sure d/r can one

I'm not so sure d/r can one up s/r in this way. One can readily embed a dynamic language in a statically typed environment. The expression type one operates on in the embedded interpreter is simply the dynamic language's dynamic type.
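A minimal sketch of such an embedding (illustrative, in Haskell): the host program is statically typed throughout, while the embedded language's values carry their types at runtime, exactly as in a dynamic language.

    data Val = I Int | B Bool

    data Expr = Lit Val | Plus Expr Expr | If Expr Expr Expr

    eval :: Expr -> Either String Val
    eval (Lit v)    = Right v
    eval (Plus a b) = do
      va <- eval a
      vb <- eval b
      case (va, vb) of
        (I x, I y) -> Right (I (x + y))
        _          -> Left "runtime type error: + expects integers"
    eval (If c t e) = do
      vc <- eval c
      case vc of
        B True  -> eval t
        B False -> eval e
        _       -> Left "runtime type error: condition must be a boolean"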

re: I'm not so sure d/r can one up s/r this way

In my opinion, that's a "nice shot" but also a "narrow miss". That problem, precisely, preoccupied me before I made my post, and I almost posted instead about how d/r and s/r are equal.

But no:

In my d/r, I can add optional static type checker after optional static type checker. I can run code passing muster with any one of them or with any ad hoc blend of them.
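A minimal sketch of what I mean (Haskell purely as notation; every name here is hypothetical): each optional checker is an independent judgment over the same program representation, and I run anything that passes at least one of the enabled checkers.

    type Program = String                      -- stand-in for a real AST
    type Checker = Program -> Either String ()

    accepts :: Checker -> Program -> Bool
    accepts c p = either (const False) (const True) (c p)

    -- run the program if ANY of an ad hoc blend of checkers passes it
    runIfAnyPasses :: [Checker] -> (Program -> IO ()) -> Program -> IO ()
    runIfAnyPasses cs run p
      | any (`accepts` p) cs = run p
      | otherwise            = putStrLn "rejected by every enabled checker"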

In my s/r, I can do the same thing but with a catch. First, I must embed a d/r sub-environment in my s/r. And then I can "do the same thing" by sticking to that d/r sub-environment.

Now, is there some "meta-type system" that would meaningfully give me an s/r whose rules were as extensible as they are when using static checking in the d/r environment but without having to "route" all of that extensibility back through the d/r?

-t

Opinionated PLs are the way to go

Setting aside questions of which form of type checking is best, I'm one that believes that programming languages should have an opinion about how programs should be written. Sure, we can theoretically say that it's possible to embed a static language using macros. But then we have to worry about issues such as standardization and libraries. So, I'd rather have a language that proposes a standardized way to do typing - static, dynamic, or in-between, rather than leaving me to my own devices to constantly be authoring a PL. That leaves me free to criticize their choices.

Though it's not really relevant to the OP's question, I'm also one that likes (and dislikes) both static and dynamic typing. Because I am both a producer and a consumer of software, and I wear many hats, I find power (and frustration) in both. I'd like a language that combines something very static (ML or Haskell) with something very dynamic (Oz or Scheme). But the PL would need to be opinionated about how to strike that balance (as I'd rather not have a balance that is fluid).

differences

I'm one that believes that programming languages should have an opinion about how programs should be written.

I think that is the real d/r vs. s/r split. I'm of the opposite opinion, and my rationale is in two parts: (a) I don't like tools that come with arbitrary restrictions on their use; (b) I see by existence proof that all the restrictions we are talking about can be lifted in a d/r while losing little, perhaps nothing.

In the way I think you are using the word "opinionated" I guess I would say that I think language A is "more than" language B if B expresses a singular opinion but A is more like an expressive medium for opinions as they relate to specific problems that programs are meant to solve. Or, putting on a silly hat: d/r is the ultimate and all the rest is just a question of optimization.

-t

RE: d/r is the ultimate

d/r is the ultimate and all the rest is just a question of optimization

... of optimization, interoperability, reuse, certification, maintainability, and any other non-functional requirement you might care to name.

The more a language pushes you towards implementing your own solutions for cross-cutting concerns (concurrency management, logging, type systems, frameworks, reactive programming, etc.), the less interoperability and reusable code you'll usually end up with.

In any case, I believe 'opinionated' isn't restricted to restriction: there is a great deal of language-culture involved. Both "There is more than one way to do it" and "There should be one (and preferably only one) obvious way to do it" are mottos of dynamic languages (Perl and Python respectively). Python would generally be called a more 'opinionated' language than Perl, but it would be difficult to claim it is restricting you 'arbitrarily' or losing much of relevance by doing so.

In my s/r, I can do the same

In my s/r, I can do the same thing but with a catch. First, I must embed a d/r sub-environment in my s/r. And then I can "do the same thing" by sticking to that d/r sub-environment.

I don't see how the reflective layer would be so limited by the static typing that it can't implement the same sort of metaprograms that are implemented for Scheme programs.

Only dependently typed languages can permit a fully static proof of type safety for any other language, but any statically typed language can implement any other statically typed language, even more powerful ones, given runtime checks in place of type soundness.
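For instance, a minimal sketch of such a runtime check (illustrative; Data.Typeable's cast is real, the rest is made up): where the host's static types cannot carry the guest language's stronger guarantee, a checked cast stands in for it.

    import Data.Typeable (Typeable, cast)

    -- a runtime check standing in where the host's static types
    -- cannot express the guest language's stronger guarantee
    checkedCast :: (Typeable a, Typeable b) => a -> Either String b
    checkedCast x = maybe (Left "guest-level type error") Right (cast x)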

If these runtime checks can be eliminated via a reflective metaprogram which implements the type system in a similar way that the type system would be implemented in the d/r, I don't see how the d/r could possibly be more powerful.

At the very least, we simply embed Scheme, then implement the reflection over embedded Scheme programs, but I don't think it would be necessary. So as I said at the outset of this thread, I think d/r and s/r are practically equivalent, and I think adding reflection to both keeps them in lockstep, where neither is more powerful than the other.

You claim

In my d/r, I can add optional static type checker after optional static type checker. I can run code passing muster with any one of them or with any ad hoc blend of them.

I'd really like to see the proof for (and clarification of) this "any ad hoc blend" claim.

"one up" and one down

you show me a static/reflective language, I can in principle always produce a type-sound program which runs in my environment but which your static type checker can't handle

Agreed. For any sufficiently expressive type system, this follows from Gödel's incompleteness theorem: no matter how many axioms the static type system contains, there will be a program that is type-safe at runtime but whose safety the system cannot prove.

That said, producing such a program might be more convoluted than desirable. I don't believe you can reasonably claim this as an argument against s/r unless you can assert a much stronger claim: that the d/r language can provide a well typed program that is a more direct or more natural (or by some other metric more optimal) solution to a problem than the s/r system allows.

It may be that this stronger claim is also true, seeing as discussing properties of the programming solution (via such things as manifest types) is a rather 'indirect' approach to expressing a solution. On the other hand, expressing unit tests falls into the same 'indirectness' category, and may (at least in practice) be necessary for dynamic solutions.

And 'Greenspunning' - by which I refer to implementing one language within another - is extremely indirect (though the host language can make it more or less 'natural').

in my dynamic/reflective environment I can always implement your language and its type checker as modules

Agreed. But that sort of Greenspunning goes both ways. Any Turing complete language is capable of implementing any other language.

The question becomes how well you can fold the advantages of the language back into the regular programming environment, how much the efficiency burden from that additional layer will cost in terms of speed and runtime support, and how difficult it is to implement, use, reuse, and maintain the language.

Dynamic reflective languages are distinctly the more powerful category.

The power of a language usually refers to its ability to express solutions to automation problems where these solutions must further compete in terms of non-functional requirements including cost, programmer time, execution speed, correctness, maintainability, portability, memory footprint, realtime support, reuse, certification, etc. There is no well defined system of 'weights' for the relative importance of these non-functional requirements, but they do need to be considered in claims of 'power'.

Even expressive power is subject to the above generalization: you need to discuss it in terms of meeting non-functional requirements - usually direct or natural expression of concepts that are usefully processed by the language (i.e. excluding comments). I'd loosely define 'direct or natural expression' as one that excludes (via isolation or abstraction) implementation details that aren't critical to the problem the programmer is solving.

Nothing you have asserted above provides reason to believe dynamic/reflective offers any real power advantage over static/reflective.

Of course, the same can be said in reverse: none of the above arguments inherently favor s/r over d/r.

No: what we've been discussing is ability, which is about functional requirements. For example, to the best of our knowledge, all Turing complete languages have the same computability (modulo communication). Absent advanced compile-time meta-programming, typed languages have the better ability to express certain ideas, especially ideas about properties of the programmatic solution itself, and about when those properties are demonstrated (compile time vs. runtime).

One thing to consider is the enforceability of the dynamic/metaprogramming solutions and whether they can be defeated by external dynamic modules mucking around with meta-object protocols or features of the dynamic language runtime. I expect one would need to provide considerations for enforceability when designing the metaprogramming/reflection facilities, at least if this was a desired property.

So d/r can always "one up" s/r and s/r can never "one up" d/r.

This might be true...

...so long as you have powerful and well designed metaprogramming facilities designed for static operations and enforceability, and you further ignore every non-functional requirement up to and including the ability to quickly get the program correct in a maintainable or comprehensible manner.

But that's both assuming and ignoring an awful lot. I really think we (you, or any of us, really) need to prove stronger claims if we're going to claim that one language design has 'one-upped' the other or is in a 'more powerful category'. Either that, or we need to keep the scope of our claims very limited and very clear. It is too easy to play Humpty Dumpty with language and mix some connotation into our 'personal' denotations for such things as "one up" and "more powerful".

type system/language dichotomy considered nonsensical

I am really not entirely sure about the whole 'some well-behaved programs won't type-check' thing. More accurately, some expressions of a well-behaved program are invalid when you stop ignoring static types. That just leads one to express the program differently, when a static type system accounts for some significant part of the expression medium. While it is easy to understand static type systems as simply being code validators, one could say the same thing about compilers and interpreters in general. That is not really the case, though.

The programs in question really aren't programs at all if they do not adhere to the rules of the language they are expressed in, whether or not that language includes a concept of static type. And if they can behave well while not adhering to any sort of language specification, which would make them unparsable and so preclude any behavior at all, then I am astounded.

Maybe I am just suffering from a lack of a priori definitions in my understanding of 'program' and 'well behaved'. I think it's more likely, though, that it is 'generally understood' that languages and their type systems are always separate concepts, and I do not understand how so many people have come to this conclusion.

As for the type-soundness of dynamically typed programs: you can, in principle, produce anything you'd like without ever verifying it in full. You cannot, however, verify the type-soundness of programs without verifying their type-soundness. So even if your type-sound dynamically typed program exists, you still have to verify it (I imagine statically) for the claim to even approach credibility. In which case I would just use whatever method you used to verify your dynamically typed program to statically typecheck mine.

Six of one, half a dozen of the other.