> Is this directly related to programming languages?
What he finds disagreeable in the faculty candidate's research
(on Fortran optimization) certainly is.
It is the tendency of some programming language research
to hide a useful idea behind a buzzword, concept, gadget, or language
that prevents you from thinking, allows you not to think,
or forces you to think only in terms of its buzzwords.
An intimate relationship with some company or the DOD,
along with non-disclosure agreements, is a likely source of this behavior.
In any science, including programming languages, the problem and its solution
have to be repeatable; otherwise the work is unfalsifiable (Karl Popper)
and not really science at all.
Open-source code or detailed specs (and test cases that show the problem being solved)
allow a problem and its solution to be duplicated.
Instead of hiding the ideas in a black box that lets you be an unthinking
consumer, the ideas should be put in a white box (or at least a grey box) where,
as well as being consumed, they can be understood, reused, or extended as need be.
(Dijkstra: "...his gadget worked in such a small set of circumstances..." vs.
"new insights").
Dijkstra's objections to the unthinking nature of the candidate's research
bear great similarity to the limitations of black box abstraction
(Dijkstra: black box = "apply it unthinkingly" = "final product" = "fully automatic"):
"...the well known failures of black box abstraction
in component and distributed systems. These include the loss of rationale,
invisibility to the client of critical implementation efficiency issues, and the
inability of the client to specialize any behaviors that cross-cut the natural
component breakdown of the system."
(Gregor Kiczales,
"Beyond the Black Box: Open Implementation",
IEEE Software, January 1996).
Quoted from
Toward a Unified Language Model
(John Hotchkiss, Group Manager, Harlequin Inc.):
...debuggability, and most importantly reusability, will only be achieved by a combination
of advances in software technology. These include better evolutionary architectural
specification, run time safety and verification, and developing support for abstractions
that span the component segmentation and allow more explicit control over the tradeoffs
of component reuse.
It is this last area that I am most interested in: how to provide the
benefits of abstraction in building dynamic applications, applications that will have to
evolve over their lifetime to support new substrates and functionality, while still allowing
the abstraction layers to be selectively flattened to improve performance and to give clients
control over performance and the meta-interfaces of a component.
This is a sort of fish net abstraction, as opposed to black box abstraction. Information
about the component or distributed object can be exposed to clients, and clients can
pick and choose the level of dependency on the available internals they are willing to accept.
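To make the fish net idea concrete, here is a small sketch of my own
(the names and the strategy argument are hypothetical, not taken from either
paper): a component whose ordinary interface hides its representation, plus a
meta-level choice that lets a client state which internals it depends on.

    ;; A table whose ordinary interface is insert!/lookup, plus a meta-level
    ;; choice that lets a client pick the representation when it cares.
    (define (make-table . maybe-strategy)
      (let ((strategy (if (null? maybe-strategy) 'alist (car maybe-strategy))))
        (case strategy
          ;; Association-list representation: simple, fine for small tables.
          ((alist)
           (let ((data '()))
             (lambda (msg . args)
               (case msg
                 ((insert!) (set! data (cons (cons (car args) (cadr args)) data)))
                 ((lookup)  (let ((hit (assoc (car args) data)))
                              (and hit (cdr hit))))
                 ((strategy) 'alist)
                 (else (error "unknown message:" msg))))))
          ;; Other representations (hash table, sorted vector, ...) would go
          ;; here; the point is only that the choice is visible and overridable.
          (else (error "unknown strategy:" strategy)))))

    ;; Black-box use: the client neither knows nor cares about the internals.
    (define t (make-table))
    (t 'insert! 'x 1)
    (t 'lookup 'x)      ; => 1

    ;; Fish-net use: the client reaches through and states its dependency.
    (define t2 (make-table 'alist))
    (t2 'strategy)      ; => alist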
For me the beauty of Scheme, SICP, and EOPL is their transparency.
Everything is transparent in Scheme:
the ideas behind the interpreter, compiler, and parser
(and the soft typing and lazy evaluation systems as well) are all
there, written in Scheme, for you to see, understand, and build on.
The core of the language definition is also transparent.
You can look at the s-expressions before and
after macro-expansion.
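For example, a minimal sketch of what that looks like (assuming PLT's expand
procedure; the accessor that turns a syntax object back into a readable
s-expression is syntax->datum in recent versions, syntax-object->datum in
older ones):

    ;; The s-expressions you wrote: a small macro and two variables it will swap.
    (define x 1)
    (define y 2)
    (define-syntax swap!
      (syntax-rules ()
        ((_ a b)
         (let ((tmp a))
           (set! a b)
           (set! b tmp)))))

    ;; The s-expression after expansion: expand returns a syntax object, and
    ;; converting it back to a datum lets you read the result directly.
    (syntax->datum (expand '(swap! x y)))
    ;; => roughly (let ((tmp x)) (set! x y) (set! y tmp)),
    ;;    modulo hygienic renaming and core forms such as let-values.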
Partial application of functions
takes place on lambda forms that you can
look at before and after.
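For instance (an ordinary user-level sketch; nothing here is a built-in):

    ;; Partial application is nothing hidden: it is just one lambda form
    ;; returning another, and both are there for you to read.
    (define (add x)
      (lambda (y) (+ x y)))

    (define add5 (add 5))   ; add5 is, in effect, (lambda (y) (+ 5 y))
    (add5 3)                ; => 8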
PLT even lets you trace expanded macros
back to the original source,
and it's all there, transparent,
for you to read, understand,
and build on, rather than re-inventing the wheel yet one more time...
which seems to be the real story behind programming languages and their products,
computer programs, to date.
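That tracing back to the source works because syntax objects carry
source-location information through expansion. A minimal sketch using current
PLT/Racket names (read-syntax, syntax-source, syntax-line; older versions
spell some of these differently, so treat the names as an assumption):

    ;; read-syntax attaches source locations to every form it reads, and
    ;; expansion preserves them, which is what lets tools map expanded
    ;; code back to the original text.
    (define in (open-input-string "(swap! x y)"))
    (port-count-lines! in)                  ; enable line/column tracking
    (define stx (read-syntax "example.scm" in))
    (syntax-source stx)   ; => "example.scm"
    (syntax-line stx)     ; => 1
    (syntax->datum stx)   ; => (swap! x y)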