Software Engineering

Generics: The Importance of Wildcards

Martin Bravenboer writes about generic wildcards in Java, and concludes that it is unfortunate that C# will not support wildcards or a similar mechanism.

Eric Gunnerson from Microsoft replies.

I was originally a type-erasure fan, but these days I am not so sure. I hope this turns into a fruitful discussion that helps me decide...
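For readers who haven't used wildcards, here is a minimal Java sketch of the use-site variance they provide (the class and method names are mine, for illustration only):

```java
import java.util.List;

class Wildcards {
    // Declared as "List<Number>", this method would reject a List<Integer>,
    // because generic types are invariant in Java. The wildcard
    // "? extends Number" makes the parameter covariant at the use site,
    // so a list of any Number subtype is accepted.
    static double sumAll(List<? extends Number> xs) {
        double total = 0.0;
        for (Number n : xs) {
            total += n.doubleValue();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumAll(List.of(1, 2, 3)));   // a List<Integer>
        System.out.println(sumAll(List.of(1.5, 2.5)));  // a List<Double>
    }
}
```

Without wildcards (or an equivalent variance mechanism), each caller would need its own overload or an unsafe cast, which is the gap Bravenboer is pointing at.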

P.S.

The Java paper was mentioned on LtU before.

Jon Udell: Tools for dynamic languages

I do think that professional tools can help dynamic languages consolidate the ground they've been gaining. And to that end, conventional capabilities -- such as support for testing, debugging, and version control -- will be prerequisites. But dynamic languages really are different in important ways, and their tools should be as well. It's time for some fresh thinking on this topic, and for some new approaches.

Jon also mentions DSLs, stratified design and Patrick Logan (who used to post here more often, and will be welcomed back should he choose to return...)

Parameterized Unit Tests

Parameterized Unit Tests. Nikolai Tillmann; Wolfram Schulte; Wolfgang Grieskamp

Parameterized unit tests extend the current industry practice of using closed unit tests defined as parameterless methods. Parameterized unit tests separate two concerns: 1) They specify the external behavior of the involved methods for all possible test arguments. 2) Test cases can be re-obtained as traditional closed unit tests by instantiating the parameterized unit tests. Symbolic execution and constraint solving can be used to automatically choose a minimal set of inputs that exercise a parameterized unit test with respect to possible code paths of the implementation. In addition, parameterized unit tests can be used as symbolic summaries which allows symbolic execution to scale for arbitrary abstraction levels. We have developed a prototype tool which computes test cases from parameterized unit tests; it is integrated into the forthcoming Visual Studio Team System product. We report on its first use to test parts of the .NET base class library.

By adding parameters, the authors turn a closed unit test into a universally quantified conditional axiom that must hold for all inputs under specified assumptions. Adding parameters improves the expressiveness of unit tests, at the expense of not providing concrete test cases. Symbolic execution is used to automatically find the set of parameter instantiations that have to be tested.
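The idea can be sketched in plain Java (the names below are mine; the paper's tool uses symbolic execution to pick the concrete arguments automatically):

```java
import java.util.ArrayList;

class AddSpec {
    // A parameterized unit test: an axiom that must hold for ALL lists and
    // elements, under the stated assumption. The body specifies the external
    // behavior of add/get rather than one concrete scenario.
    static void addThenReadBack(ArrayList<Object> list, Object element) {
        if (list == null) return;          // assumption: filters invalid inputs
        int count = list.size();
        list.add(element);
        if (list.get(count) != element)    // postcondition of the axiom
            throw new AssertionError("add/get axiom violated");
    }

    // Instantiating the parameters re-obtains a traditional closed unit test.
    static void closedTest() {
        ArrayList<Object> list = new ArrayList<>();
        addThenReadBack(list, "hello");
    }
}
```

A tool in the spirit of the paper would choose a minimal set of `(list, element)` instantiations that together cover the implementation's code paths.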

This seems like an interesting approach to unifying unit testing and algebraic specifications.

Spec#

Spec# is an extension of C#. It extends the type system to include non-null types and checked exceptions. It provides method contracts in the form of pre- and postconditions as well as object invariants.
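Spec# itself is a C# extension, but the flavor of its contracts can be suggested with a hand-coded Java sketch (the `Account` class is invented for illustration; in real Spec# the compiler and verifier enforce these clauses rather than explicit checks):

```java
class Account {
    private int balance;   // object invariant: balance >= 0

    Account(int initial) {
        if (initial < 0)                          // precondition ("requires")
            throw new IllegalArgumentException("requires initial >= 0");
        balance = initial;
        checkInvariant();
    }

    // In Spec# this would be declared roughly as:
    //   void Withdraw(int amount) requires 0 < amount && amount <= balance;
    void withdraw(int amount) {
        if (!(0 < amount && amount <= balance))   // precondition
            throw new IllegalArgumentException("precondition violated");
        int old = balance;
        balance -= amount;
        if (balance != old - amount)              // postcondition ("ensures")
            throw new AssertionError("postcondition violated");
        checkInvariant();
    }

    int getBalance() { return balance; }

    private void checkInvariant() {
        if (balance < 0) throw new AssertionError("invariant violated");
    }
}
```

The point of Spec# is that such conditions become part of the type system and method signatures, checkable statically, instead of ad hoc runtime code like the above.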

The Spec# programming system includes a static program verifier. This component translates Spec# programs into logical verification conditions and internally uses an automatic theorem prover that operates on the verification conditions deduced from the Spec# contracts. The system also provides an interface to the Spec Explorer tool for test generation and model-based testing.

A unique feature of the Spec# programming system is its guarantee of maintaining invariants in object-oriented programs in the presence of callbacks, threads and inter-object relationships.

Spec# (also "specsharp" for search engines and the like), is now available for download. The home page gives a list of related publications.

AOP blog and aosd discussion

If you haven't seen the AOP blog you might want to check it out.

Via the blog we find the AOSD discussion titled AOP considered harmful which, not surprisingly, is quite heated :-).

It has been a while since we discussed AOP, but I know many LtU regulars are skeptical about AOP, as am I. It's interesting to look at the discussion from the other side's point of view.

Relating FFTW and Split-Radix

Relating FFTW and Split-Radix. Proc. of ICESS'04, the First International Conference on Embedded Software and Systems, December 9-10, 2004, Hangzhou (Zhejiang), China.

This ongoing attempt to reproduce an efficient implementation of FFT using staging and abstract interpretation attempts to answer the question "How can we get the raw performance of hardware without giving up the expressivity and clarity of software?"

Here's how Oleg describes the contribution of this paper,

One may think that generating truly optimal power-of-two FFT is straightforward: we generate the naive radix-2 FFT code, and then optimize it, removing trivial multiplications (x*1), trivial additions (x+0), etc. That was the approach demonstrated previously.

And the bottom line is: the improved code is better than the original, but still worse than the best (FFTW) code. Here's why: if we consider an expression, (sin(pi/4) * x + cos(pi/4) * y), we may think that it can be optimized to sin(pi/4)*(x+y) thus saving one multiplication. Alas, computed sin(pi/4) and cos(pi/4) are not _exactly_ equal. Therefore, no optimizer is able to detect that they are the common factor. The previous paper BTW did handle the case of sin(pi/4) -- and still fell short of FFTW. To achieve optimum, more complex trigonometric transformations have to be applied.

This paper shows that to achieve the best quality of the generated code, we have to use domain knowledge. In case of FFT, we use the fact that it is a linear transform whose factors are roots of unity. This gives us an abstract representation of terms amenable to exact trigonometry. We can essentially symbolically manipulate the terms, and then concretize our abstract representation into the code. In the end, we generated FFT code that exactly matches the number of FP operations of FFTW. Somewhat unexpectedly, when we changed the function that generates the code for general complex multiplication, we obtained `split-radix FFT' -- another well-known FFT technique.

Oleg points out that the point isn't that they managed to reproduce the FFTW results. The crucial point is that we know exactly which identities (i.e., axioms) contributed to the optimum. The search was principled rather than heuristic, and the code is generated in only one pass. There are no manipulations on the code: it is generated just right.
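The rounding problem Oleg describes is easy to see for yourself. A small Java check (illustrative only; the exact bits depend on the platform's math library):

```java
public class TrigFactor {
    public static void main(String[] args) {
        double s = Math.sin(Math.PI / 4);
        double c = Math.cos(Math.PI / 4);
        // Mathematically both equal sqrt(2)/2, but the computed doubles can
        // differ in the last bit, so a purely syntactic optimizer never sees
        // them as the common factor needed to rewrite s*x + c*y as s*(x + y).
        System.out.println(s == c);            // may well be false
        System.out.println(Math.abs(s - c));   // tiny, but possibly nonzero
    }
}
```

This is why the paper moves to an abstract representation (roots of unity) where the equality holds exactly and can be exploited symbolically before any floating-point code is emitted.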

The Glasgow Haskell Compiler Survey - GHC needs your feedback!

If you're a GHC user, the Glasgow Haskell Compiler headquarters needs your feedback! See Simon Peyton-Jones's original message, or go directly to the user survey. Here's a quote from the original message:

We'd like to hear from *absolutely everyone* who uses GHC: whether you
are a first-year undergraduate or a famous professor; whether you use
GHC for industrial applications or recreation; whether you use
higher-rank polymorphism or have only just learned what functional
programming is. Everyone!

We'll use the information to guide our future priorities, and we'll
publish some kind of summary of what we learn in due course. It's all
anonymous, of course, unless you choose to say who you are.

Pugs, Practicing the Theories

A lot of language theory goes past here on Lambda the Ultimate, but we rarely see that theory directly impacting commercial programmers.

I'm a great fan of theoretical concepts like arrows, but at the same time I'm a self-employed programmer interested in solving my clients' problems.

Pugs is notable in that it profitably uses recent developments such as GADTs and Template Haskell for an implementation of Perl6.

I recently became a regular on the #perl6 irc channel and soon after joined the list of committers.

In just a few days I've seen a lot. I've seen enthusiastic members of the Perl community learning Haskell. I've seen myself learning Perl. I've also seen how daily Perl programmers work with abstractions like monad transformers. I've seen how some structures are easy to extend for programmers new to both the Pugs codebase and Haskell.

The Pugs project was started 64 days ago by Autrijus Tang, as an exercise while reading TaPL. Pugs already includes network and threading primitives. New tests and code are added at an amazing rate, as evidenced by the smoke tests.

I don't know if I'll end up using Perl after Pugs is written, but I am learning how to practice the theory of programming language design and implementation.

Grady Booch: AOSD keynote

The slides from Booch's AOSD keynote The complexity of programming models are online.

If you have been following Booch's recent presentations, you'll recognize many of the slides. Only towards the end does AOP come up.

Related to PLs are the slides about simplicity in languages and programming models, the discussion of abstraction, and of course AOP.

The presentation was already mentioned in the LtU forum.

Inside Software Factories

This interview with Steve Cook is helpful in trying to understand what's behind the hype about software factories and DSLs.

[a domain specific language] tends to look like boxes and arrows on a screen. It’s a graphical modelling language with metadata underneath; and very importantly, metadata that’s domain specific. So if your domain is that of a business process, then the metadata is recognisable XML that describes business processes with tags that will say ‘process’ or ‘activity’, or whatever. It will be customised to your job.
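The kind of domain-specific metadata Cook describes might look something like this hypothetical fragment (all tag and attribute names are invented for illustration, not from any actual product):

```xml
<businessProcess name="ApproveOrder">
  <activity name="CheckCredit">
    <transition to="ShipOrder" when="approved"/>
    <transition to="NotifyCustomer" when="rejected"/>
  </activity>
  <activity name="ShipOrder"/>
  <activity name="NotifyCustomer"/>
</businessProcess>
```

The boxes-and-arrows editor would render and edit this metadata directly, which is what makes the language "customised to your job" rather than a general-purpose notation.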