Rapid Prototyping tools & environments from academia

Going into 2010, what are the most interesting rapid prototyping tools & environments coming from academia?

What I am aware of:

I am sure there are plenty of things I've not tried yet.

I figure there is no harm asking for a brain dump of what the rest of LtU is using, interested in, and/or learning about.


Does Scratch qualify?

Does Scratch qualify?

As you wish

I am personally most interested in prototyping tools that allow manipulation and extension of program syntax. Bonus points if it is based firmly in logic, as Maude is.

However, things like Reuseware, which was born from things like E-CoMoGen and CoSy, are mainly composition systems and not really about formal analysis. My interest in Reuseware is within the context of a reflective term rewriting system, perhaps like Maude.

Edit: One thought I had when I wrote the post was whether or not Coq was a "rapid prototyping tool". I have unfortunately never used Coq or read much about it. You need to learn to crawl before you can walk, and so I have tried to master specification and execution from within Alloy and Maude. Actually, I will add Alloy to the list, as it is excellent for rapid prototyping behavioral specifications.

Thanks

whether or not Coq was a "rapid prototyping tool"

That one made me laugh :)

Why?

J/w...

Just that most of the ways

Just that most of the ways I've seen Coq used, I would characterize as the opposite of rapid prototyping.

I would too

However, from the demonstrations I've seen of it, Coq can remove a lot of boilerplate when trying to prove a theorem. Also, any language with a WHY feature can explain to you what you just told the computer to do, even if you didn't understand the results.

I do have Yves Bertot's book on Coq and theorem proving, but I am not even halfway through it -- I bought it in August and am logjammed with other stuff to read.

I've found 'rapid

I've found 'rapid prototyping tool' to be a nice way to say 'incomplete library', 'not reusable', 'bad if any FFI', etc.

The only positive features I can correlate are, potentially, a GUI/IDE and, typically, dynamic typing and GC. Perhaps domain specificity, but that opens a can of worms.

As for syntactic manipulation, lex/yacc (and more likely the OCaml extensions, by the whole GC argument). I've struggled with ANTLR in the past usability-wise, but its support for lexing+parsing+visitors+multiple backends, and now tree transforms, makes it sound like the best kid on the block.

Mindset is important. Rapid

Mindset is important. Rapid prototyping allows you to create disposable and incomplete code to evaluate and iterate on design ideas. The key here is "disposable": if your developers get too attached to the code, they will not want to throw it away and your design can't evolve away from it. "Incomplete" is also important: you don't want to build anything that isn't needed to evaluate the design, so we also use lots of fake back ends and stub generators.
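To make the "fake back end" idea concrete, here is a minimal Java sketch (the service name and data are invented for illustration): the stub returns canned data and is obviously disposable.

    import java.util.List;

    // Hypothetical service interface the real product would eventually implement.
    interface OrderService {
        List<String> pendingOrders(String customerId);
    }

    // Disposable stub: canned data, no persistence, no error handling -- just
    // enough to exercise a design idea, and clearly not production code.
    class FakeOrderService implements OrderService {
        @Override
        public List<String> pendingOrders(String customerId) {
            return List.of("order-001", "order-002", "order-003");
        }

        public static void main(String[] args) {
            OrderService service = new FakeOrderService();   // swap in the real back end later
            System.out.println(service.pendingOrders("acme"));
        }
    }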

You also want to ensure that your stakeholders know something is a prototype; Expression Blend's SketchFlow uses a special paper-based look and feel for UI prototyping for exactly that reason.

Can you imagine any system

Can you imagine any system that is purposefully inconvenient to develop in and works against iterative development? And where that isn't considered a weakness? Even in verification, approaches like iterative refinement are valued. For the most part, "rapid prototyping tool" describes what an incomplete tool happens to be usable for, and how its designer apologizes for that incompleteness. Not supporting prototyping / iterative development in a mature system seems like a bug (in general, but not always).

As for making throwaway code... that's like UML: ultimately, if you have a specification / communication medium, folks *will* generate code from it and build real systems based on it. I've mentioned this here before: I've seen an actual factory take input from a system of Excel spreadsheets.

Acknowledging a weakness

Acknowledging a weakness does not fix or excuse it. An environment for rapid development ought to be superior for that task.

And to answer your first question: yes, people do purposefully make languages and systems that are inconvenient to develop in. The intention is usually to achieve a trade-off for another property, of course, but the harm against productivity or iterative development is often anticipated in advance, and the decision still executed with purpose.

Can you provide concrete

Can you provide concrete examples? My intuition is that whenever a language designer makes such a tradeoff, they also view it as a bug to be fixed.

Design Trade-offs

Not all trade-offs are 'bugs' that can be fixed. They are, simply, trade-offs.

Java's checked exceptions are one example, forcing a lot of boiler-plate and inconvenient hacks to integrate behavioral abstractions (such as visitor pattern) in order to presumably gain some measure of safety.
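A small Java sketch of the friction being described (the type and method names are invented): a visitor interface whose methods declare no checked exceptions forces every I/O-performing implementation to wrap or tunnel them.

    import java.io.IOException;

    // Visitor interface whose methods declare no checked exceptions.
    interface NodeVisitor {
        void visitLeaf(String value);
    }

    class PrintingVisitor implements NodeVisitor {
        @Override
        public void visitLeaf(String value) {
            try {
                writeToLog(value);                  // declares IOException
            } catch (IOException e) {
                // The checked exception cannot escape visitLeaf without widening the
                // interface, so it gets wrapped in an unchecked one -- the kind of
                // boiler-plate hack referred to above.
                throw new RuntimeException(e);
            }
        }

        private void writeToLog(String value) throws IOException {
            throw new IOException("stand-in for some I/O failure");
        }

        public static void main(String[] args) {
            try {
                new PrintingVisitor().visitLeaf("hello");
            } catch (RuntimeException e) {
                System.out.println("wrapped: " + e.getCause());
            }
        }
    }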

Automatic variable declarations are another common trade-off; they introduce much greater risk for spelling errors, but are often more convenient for hacking out a quick function.

A lot of language-designers get it into their heads that design is a zero-sum game, that it's all about trade-offs. This is unfortunate, for it blinds them to a simple truth: they are always free to intentionally do much worse by the metrics they favor, and therefore there must also exist the possibility of doing better by the same metrics (though discovering new low-hanging fruit in the design-space will often prove a challenge; OTOH it seems to me there are some huge orchards of low-hanging fruit in effect typing or layered languages, dependent typing or partial function typing).

It seems you wish to say that, had the designers known of solutions that provided the same features without imposing the inconvenience on users, they would have chosen them. Even this is not true, for many such solutions mean inconveniencing the implementors (e.g. by requiring a theorem prover, a graphical IDE, or concurrency control, allocation, linking, GC, or persistence features not well supported by the favored implementation languages and OS), and sometimes ease of initial implementation is given greater priority than convenience for users of the language.

Thus many language designers make these trade-offs. They do so with awareness of what they expect to lose and what they expect to gain. They do so with purpose - purposefully, intentionally, not accidentally - even if not with desire.

Don't make trade-offs, solve the problem instead

A lot of language-designers get it into their heads that design is a zero-sum game, that it's all about trade-offs.

This is wrong, of course. There is a theory of invention called TRIZ that has an elegant solution to the problem of trade-offs. Whenever there are two opposing forces in a design (which they call a technical contradiction), there is an opportunity for invention. The right way to solve the situation is not to make a trade-off (which is bad for everybody), but to introduce a new idea that makes the contradiction go away. Genrich Altshuller, the founder of TRIZ, studied many pairs of this kind and summarized the results in a Contradiction Matrix.

I can give an example from programming languages. We would like to increase the reliability of programs, and yet not increase their complexity. How can we do it? For this particular pair (increasing reliability while not increasing complexity), one of the solutions given in the Contradiction Matrix is "Divide an object into independent parts". This insight can lead one to a design like Erlang, which is made of independent parts.
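A rough Java sketch of the "independent parts" idea (names and behavior are invented, and plain threads are only a crude stand-in for Erlang's failure-isolated processes): each part runs in isolation, so a failure in one does not take down the others.

    import java.util.List;

    public class IndependentParts {
        public static void main(String[] args) {
            // Three "independent parts"; the failure of part B stays local to it.
            List<Runnable> parts = List.of(
                () -> System.out.println("part A ok"),
                () -> { throw new IllegalStateException("part B failed"); },
                () -> System.out.println("part C ok")
            );
            for (Runnable part : parts) {
                Thread t = new Thread(part);
                t.setUncaughtExceptionHandler(
                    (thread, err) -> System.out.println("isolated failure: " + err));
                t.start();
            }
        }
    }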

Technical Contradiction is Opportunity for Innovation

Well said!

I had not heard of TRIZ before now, but this principle (though I have never before seen it stated so clearly) is one that has guided me for a long time. I am enough of an idealist/perfectionist for my hobby work that I find compromise and trade-offs quite irritating, so I'm always looking for designs that solve many problems at once.

Unfortunately, language-design involves managing many more than two forces at any given time. In addition to reliability (robustness and resilience) and complexity for users, you've got composability and extensibility, upgradeability, performance and optimizability, real-time and space limitations, resource management, implementation complexity, authority, secrecy, concurrency, decentralization, consistency, accessibility, searchability... etc. perhaps not all at once, but often many at a time.

I wonder if the Contradiction Matrix should be three or four dimensional...

Anyhow, I'll be taking a look. Thanks for the reference.

I'm also curious

Reference for TRIZ

The best reference I know about is the book The Innovation Algorithm: TRIZ, systematic innovation and technical creativity by Genrich Altshuller. The origin of TRIZ is a typically Russian story. Altshuller came up with the basic insights while interned in a Stalinist concentration camp. Many of his fellow prisoners were scientists, and Altshuller suggested they create a kind of University with just one student (himself). So he got a lot of education, and his fellow prisoners' morale was boosted. After he was released, he started reading Authors' Certificates (the nearest Soviet equivalent to patents, but of course unusable for anything more than getting credit for an invention). By classifying and generalizing the ideas in these inventions he started making TRIZ into a realistic discipline.

Most of the examples in Altshuller's book are based on physics and chemistry. It would be really interesting to apply the ideas to computer science. A perfect topic for a Ph.D. thesis!

RE: A perfect topic for a Ph.D. thesis!

A perfect topic for a Ph.D. thesis!

How would that work with programming language theory?

For the example you gave:

Strengthening feature = Reliability
Worsening feature = The Quantity of the Substance (Complexity)

In order to make a program more reliable (say, trustworthy, when translated into a distributed computing environment), we may need to account for partial failure. The sort of trustworthy machinery required to make this work falls under "mechanics substitution" (Principle 28), whereas your "Divide an object into independent parts" falls under Principles 3 (Local Quality) and 40 (Composite Materials).

I'm asking, because I don't have experience using TRIZ, just reading about it. I think I first read about it in the References section of your book, CTM, years ago. I know I purchased many books you cited in the References section, such as Object Lessons. To me, "Divide an object into independent parts" only allows for limited innovation (Erlang).

Another observation here would be that TRIZ does not account for strengthening certain ideals that mostly apply only to intangible, ephemeral goods like software, such as trustworthiness.

Those are the two major problems I had mapping TRIZ to software: (1) mechanics substitution deals with physical disciplines rather than arbitrary model interactions; (2) missing ideals.

Side note: It also seems you characterize "complexity" as 'shape', whereas I characterize it as quantity; increasing the number of components increases the state-space of a program. I think this is a much better mapping.

You should try working in a

You should try working in a design studio someday...

There is agile iterative development of course, but this is only iterative in the sense that the solution strategy evolves as you build more of it. Before that occurs, the product design requirements are hopefully already known. To come up with those requirements, designers talk to users and construct wireframes and all sorts of prototypes. Some of them are really low fidelity; e.g., they are done on paper or in PowerPoint. There is definitely a need for low fidelity software prototypes, and the technology you use to construct these can/should be different given the completely different priorities. It's a good idea to put up as many barriers as possible to prevent your prototypes from seeping into your products.

Exploratory programming is also considered prototyping, though you might not be exploring a particular design. My views on exploratory programming are well known.

In general, it's a good idea to separate prototyping and production activities, even if the same person is doing both. Anything else is very dangerous.

As for syntactic

As for syntactic manipulation, lex/yacc (and more likely the OCaml extensions, by the whole GC argument). I've struggled with ANTLR in the past usability-wise, but its support for lexing+parsing+visitors+multiple backends, and now tree transforms, makes it sound like the best kid on the block.

I'm curious, what were your pain points with ANTLR?

I personally like the ideas behind CoSy and its Engine Description Language (EDL), a DSL for configuring how compiler components should interact. It is very different from the traditional Unix model of pipes and filters operating on unstructured byte streams; instead it uses a repository-based architecture where components define and expose 'views' to other components. I'm wondering if there is an even better approach than the repository model?

I've used the Windows PowerShell "object-oriented shell" model for connecting components for over a year now. It is basically limited to being a scripting language, and it has one or two gotchas where the syntax is just different enough from other languages to be annoying usability-wise: e.g., defining a function uses parentheses around its parameters while calling one uses spaces to delimit them, and any use of parentheses creates an array literal. Unlike BeanShell, methods can be stored somewhere other than global scope, via "drives" and "folders", much like the Unix resource model. PowerShell's composition system thus seems to sit somewhere between Unix and CoSy.
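A very rough Java sketch of what the repository idea could look like (all names and views are made up): components publish typed 'views' into a shared repository, and later components pull the views they need instead of re-parsing a byte stream.

    import java.util.HashMap;
    import java.util.Map;

    // Toy repository: components expose named views; other components consume them.
    public class Repository {
        private final Map<String, Object> views = new HashMap<>();

        public void expose(String viewName, Object view) { views.put(viewName, view); }

        @SuppressWarnings("unchecked")
        public <T> T view(String viewName) { return (T) views.get(viewName); }

        public static void main(String[] args) {
            Repository repo = new Repository();
            // A front-end component exposes a symbol-table view...
            repo.expose("symbols", Map.of("main", "function"));
            // ...and a later component consumes it directly.
            Map<String, String> symbols = repo.view("symbols");
            System.out.println(symbols.get("main"));
        }
    }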

In my book, most parser generators lack the ability to extend the grammar at run time. This is what appears to set Lisp and Forth apart from the others I have mentioned. CoSy comes close, but does not seem to have a notion of hot-swapping, which is a nice feature for rapid prototyping.
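As a toy illustration of run-time extensibility (everything here is invented, and a keyword-to-handler table is of course far weaker than what Lisp or Forth actually offer), here is a reader whose set of recognized forms can grow while the program is running, without regenerating a parser.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // A toy "extensible grammar": new forms can be registered at run time,
    // which a statically generated parser typically cannot do without a rebuild.
    public class ExtensibleReader {
        private final Map<String, Function<String, String>> forms = new HashMap<>();

        public void define(String keyword, Function<String, String> handler) {
            forms.put(keyword, handler);          // extend the grammar at run time
        }

        public String read(String input) {
            String[] parts = input.split(" ", 2);
            Function<String, String> handler = forms.get(parts[0]);
            if (handler == null) return "unknown form: " + parts[0];
            return handler.apply(parts.length > 1 ? parts[1] : "");
        }

        public static void main(String[] args) {
            ExtensibleReader reader = new ExtensibleReader();
            reader.define("echo", arg -> arg);
            System.out.println(reader.read("echo hello"));     // hello
            reader.define("shout", arg -> arg.toUpperCase());  // added later, no rebuild
            System.out.println(reader.read("shout hello"));    // HELLO
        }
    }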

{Methodology} /\ {Tool, Tutor, Tutee} = 0

Folks,

I think we should look to basic roots to understand what a rapid prototyping tool gives you, and also the degree to which it can actually be used to rapidly prototype ideas.

In my mind, exploratory, evolutionary programming, as well as iterative and incremental analysis, design, and implementation, can all best be understood in terms of viewing the computer as:

  • Tool
  • Tutor
  • Tutee

This framework is explained in detail by one of the seminal books in the field of computer-assisted instruction: The Computer in the School: Tutor, Tool, Tutee.

What I most frequently aim to do with prototyping could usually be replaced by better vendor documentation. However, coming into a new product for the first time is quite like stepping into a classroom on a new subject for the first time. The approach you take is methodology-agnostic. In fact, if you read the book linked above, you will note that all the major visionaries of this field agree that most criticisms of the field are born out of methodology concerns, not tutor, tool, tutee concerns.

Some methodologies do recommend coupling prototyping with production, but they refine the notion of prototype, as discussed in The Pragmatic Programmer:

  • Tip 15: Use Tracer Bullets to Find the Target
  • Tip 51: Don't Gather Requirements--Dig for Them

and the following checklist:

Things to Prototype (p. 53):

  • Architecture
  • New functionality in an existing system
  • Structure or contents of external data
  • Third-party tools or components
  • Performance issues
  • User interface design

...And of course this list is a continuum rather than discrete items. For example, when learning WPF, I would augment examples from the 10 books I purchased on the subject, as well as interesting blog articles I found. As one illustration, I took Charles Petzold's demonstration of routed events and then hooked up every event in the system to a generic syntax-directed event handler. I could then dynamically subscribe any event handler in the system, after I had seen an event raised. In this way, I could add a button to a form, click it or try to drag it, see the events being raised, and say, "hmm, let me write some code to experiment with what I can do with that." This integrates some elements from performance, user interface design, and third-party components.
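The trick described here is WPF-specific, but the general shape is easy to sketch. Here is a rough Java analogue (not the author's actual code; all names are invented) of routing every event through a generic logging handler and then subscribing ad-hoc handlers once you have seen an event fire.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Route every event through one generic handler that logs it, and let the
    // experimenter attach specific handlers after seeing an event raised.
    public class EventProbe {
        private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

        public void raise(String eventName, String detail) {
            System.out.println("event raised: " + eventName);   // generic "see what fires" hook
            subscribers.getOrDefault(eventName, List.of())
                       .forEach(handler -> handler.accept(detail));
        }

        public void subscribe(String eventName, Consumer<String> handler) {
            subscribers.computeIfAbsent(eventName, k -> new ArrayList<>()).add(handler);
        }

        public static void main(String[] args) {
            EventProbe probe = new EventProbe();
            probe.raise("Button.Click", "first click");           // observed, nothing handles it yet
            probe.subscribe("Button.Click",
                detail -> System.out.println("experimenting with: " + detail));
            probe.raise("Button.Click", "second click");          // now the ad-hoc handler runs
        }
    }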

Another example: We have our own ETL tool suite where I work, and its sole purpose is understanding "Structure or contents of external data". We sell products to acute divisions within large companies, and often to get our foot in the door, we need some of their data to sample and show what our analysis tools can do for their business. However, we also have to avoid going through IT at a Fortune x00 company, because that would entail requests for funding from IT (just so we can do a sales demo). So we're often stuck getting sample data from someone with marginal IT skills. We need to be able to rapidly prototype whether they sent us the right data. At the same time, this will influence and guide our software factory designs, as we better understand the terminology of that prospect and how it maps to our terminology.

Cheers

I can't help but wonder if

I can't help but wonder if you are including a lot of things here that go beyond prototyping. To me, we build prototypes simply to answer questions and accumulate knowledge. Demos are more for communication and promotion purposes (e.g., a sales demo or a demo scene hack) but can utilize similar quick-and-dirty techniques. Prototyping itself is just a part of the design/development process, whatever process you are using.

Exploratory and introspective programming might be different topics, though I can see how they help with the prototyping process. You might want to look at some of Turkle's work on tinkering (soft bottom-up vs. hard top-down processes).

Link me to Sherry's work

I have a good grasp on what it takes to understand complex problems quickly. Time-scale should be a huge motivating factor. Who cares if you can prototype something over the course of 30 years?

Data analysis is the most grueling part of data mining, and is why many US financial companies are shipping data scrubbing off to India. In order to "type" your result sets, you need to characterize them and understand how the data interrelates.

Perhaps some prototyping techniques are only prototyping tools in comparison to what other approaches exist today. We built our ETL tools after reading Oren Eini's I Hate SSIS rant. We sort of realized SSIS was ridiculously over-engineered and designed for "Fortune 500 controls" rather than prototyping; SSIS is built for change control management and allowing 5 people sitting in a meeting to all pretend they understand what a flowchart picture means. Our prototyping tools are built for people who need to make judgments on things in 2 hours, not 2 weeks.

Likewise, I don't think introspection is the key idea in my WPF prototypes. I did instrument my prototype with the ability to type "edit" (e), "step-into" (si), "step-over" (so), and "continue" (c) at a command prompt. This allowed me to change the GUI live, in real time. I could also write event handlers that spun up separate processes, effectively giving me a command shell. -- This is not a radical idea, and is discussed in visual programming books like Your Wish Is My Command and Watch What I Do. Symbolics Genera also apparently supported such features, and I am pretty sure Kent Pitman once argued on comp.lang.lisp that Genera's Dynamic Windows was superior to Garnet. For background on the model I used to prototype my WPF experience, see: Presentation-based User Interfaces.

Other inspirations included Andy Gavin's GOAL language for the PS1/PS2 game consoles, but I never got around to implementing a network-based linker/loader (because I couldn't decide between Comet [client/server] and plain old TCP/IP [peer-to-peer]); instead I cheated by monolithically "pulling" assemblies and continuously unloading AppDomains on refresh. I also wanted a component distribution scheme similar to the Jade vision Andy Black had for the Emerald programming language (see also the HOPL 2007 paper and presentation video) -- a component is a typed lambda expression with no free variables, which simplifies the design of, or eliminates the need for, the linker and allows easier incremental recompilation because there is strong locality of definition.
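A tiny sketch of that kind of command prompt (illustrative only; the commands and behavior are stand-ins, not the actual WPF tooling described above):

    import java.util.Scanner;

    // Minimal command loop in the spirit of the e / si / so / c prompt described above.
    public class PrototypeShell {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            while (in.hasNextLine()) {
                switch (in.nextLine().trim()) {
                    case "e"  -> System.out.println("edit: reload the changed view");
                    case "si" -> System.out.println("step-into: pause before the next handler runs");
                    case "so" -> System.out.println("step-over: run the handler, pause afterwards");
                    case "c"  -> System.out.println("continue: resume normal event dispatch");
                    default   -> System.out.println("unknown command");
                }
            }
        }
    }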

In short, many prototypes I build allow me to put my hand on the brain of the computer system I want to control. They in no way embody good design practices. The stuff I did not do (better debugging, better automated code distribution) would have required refactoring or a rewrite and turned the prototype from an experiment into a real industrial-grade design.

Her '95 Life on the Screen

Her '95 Life on the Screen book. She also makes some interesting observations about gender and programming related to this subject.