Lambda the Ultimate

Case Study in Practical Language Use
started 5/23/2004; 8:01:41 AM - last post 5/26/2004; 8:21:19 AM
andrew cooke - Case Study in Practical Language Use
5/23/2004; 8:01:41 AM (reads: 334, responses: 22)
I work on a project that provides software to process astronomical data (observations from telescopes). It has been around forever (30 years?), and is still the main solution used in professional astronomy. Its great strengths (imho) are solid engineering (30 years of development without disintegrating into spaghetti is impressive) and its huge accumulation of knowledge and practical experience.

However, I managed (yes, stupidly) to declare it an embarrassing joke - or words to that effect - in an internal email discussion last week. So, motivated largely by the desire to save a small portion of my ass in an upcoming meeting, but also by a genuine wish to improve things, I'm asking for advice here on how the project might move forwards. This is, of course, completely unofficial. I'm asking on lambda, particularly, because I feel many of the problems are related to language, and also in the hope that it's a useful example in how/why/if the languages we cherish here are not so easy to use in the "real world".

The user interacts with the system via an imperative, interpreted, typed scripting language with a C-like syntax. Individual scripts correspond to tasks, and have parameter lists that can be edited. This language has some limitations: it is slow; it has no support for functions (other than calling another script as a task), closures, objects, or other user-defined structures (no pointers); and collections must be managed either as arrays (typed, with pre-defined sizes and limited memory) or by writing to temporary files (which is not as bad as it sounds, since the language has support for scanning data from these files).

In other words, the interface is modeled on a Unix shell and has the expected limitations for writing large programs.

To actually process data, therefore, tasks call compiled code. The source is passed through a pre-processor to generate Fortran 66. The pre-processor adds C-like syntax, C's dynamic memory model (with associated unsafe types), limited generics (template based, over the basic numeric types), and syntactic sugar for a hidden return value that works a little like exceptions.
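
To make the last point concrete, here is a sketch of the hidden-status pattern (Python with invented names; the real pre-processor emits Fortran 66, so this is only an analogy): every call threads an extra status value alongside its result, and the sugar makes checking it look like exception handling.

  # Sketch of the hidden-status pattern (Python, invented names; the
  # real system generates Fortran 66, so this is only an analogy).
  # Every call returns an extra status value alongside its result, and
  # the pre-processor's sugar hides the checking.
  STATUS_OK, STATUS_ERR = 0, 1

  def read_pixels(path):
      if not path.endswith(".fits"):
          return STATUS_ERR, None          # the "hidden" return value
      return STATUS_OK, [1, 2, 3]          # stand-in for real data

  def scale(path, k):
      status, pixels = read_pixels(path)
      if status != STATUS_OK:              # what the sugar hides:
          return status, None              # propagate, like a re-raise
      return STATUS_OK, [k * p for p in pixels]

  print(scale("frame.fits", 2))            # (0, [2, 4, 6])
  print(scale("frame.txt", 2))             # (1, None)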

This all runs on a variety of platforms. Historically these included Vax/VMS, and various ancient Unices. These days it's used with Linux, OSX, SunOS, Solaris, HP Unix, possibly DEC OSF1, etc - ie anything Unix-like. There is no support for Win32.

While it's not Lisp, this design clearly got a lot right for its time, hence its success. However, it does have some problems. It is difficult to integrate with other systems and has little support for distributed computing. It is also something of a pig to program. Astronomers tend to start writing tasks in the scripting language, then hit the limitations, and end up with a mess. Formal internal development tends to be compiled, but is (in my opinion) hampered by the lack of support for abstractions that are pretty much standard these days. No abstract interfaces, objects, closures, encapsulation, first class functions, memory management or continuations. Combine that with unsafe typing and a lack of data structures (almost everything uses arrays or explicitly constructs C-like structs), and you can see why I'm sometimes a little frustrated.

But how could it be better?

It seems to me that the single weakest point is the scripting language. It is so restricted that all "serious" development is pushed down into the compiled layer. So one solution might be to replace the scripting language with something else. A third party Python implementation exists and there is talk of adding hooks to the interpreter so that various other languages could be integrated. Integrating Python would allow more of the "astronomy know-how" to be implemented at a higher level, with only number-crunching left to the compiled code.

The big advantage of this solution is that it is largely progressive. Functionality could be slowly migrated from the compiled layer to Python without the need for a complete re-write of existing code.

Or maybe it's better to throw everything away and start again? What other language has support across every common variety of Unix and produces fast numerical code? The only answer I know of is C or C++, which, to my eyes, isn't a huge improvement. Maybe the numerical work should be done using Blitz? What would the scripting language be? Am I being closed-minded about C++?

Or should a single language be used? OCaml is the nearest I can find to a single-language solution. What does it lack?

What about distributed computing? This is becoming increasingly important, as is more general integration with other processes. For example, we want to provide a web interface to data, and to allow remote data processing via the web. How do we do this? The web interface is going to be J2EE (Java) based. This suggests using CORBA for integration. C++ is a possibility. Last time I looked, OCaml didn't have CORBA support (it almost does, but when you poke around, it doesn't). There is, of course, no support for capabilities in what we have now, or any other attempt at security.

When you remember all the knowledge that's hidden away in the code, and look at the restrictions and requirements, and realise that this is maintained by only a handful of people, it's hard to see a good solution. I'd appreciate any suggestions...

Andris Birkmanis - Re: Case Study in Practical Language Use
5/23/2004; 8:47:46 AM (reads: 323, responses: 1)
Not sure about the scripting and crunching (though OCaml indeed sounds good), but why do you need J2EE for the web?? And why CORBA to integrate with that? Is this pushed from the top?

andrew cooke - Re: Case Study in Practical Language Use
5/23/2004; 9:16:48 AM (reads: 322, responses: 0)
well, the web thing isn't my project and i don't know the history, but the archive behind the web interface will eventually be quite large (many terabytes of data) and i guess j2ee/sql was chosen because that's what people tend to use for that kind of task (as i said, i wasn't here when the decision was made, but perhaps there was a political battle to use java rather than the fortran-66 based language i mentioned earlier - there's very little experience within the group of anything else - or perhaps it was a management decision. i really don't know.).

i didn't describe it in detail, because i'm more involved in the number crunching side of things, but it's not "just" a web site. it has to provide support for astronomers to manipulate data remotely, search catalogues, display images, etc. the archive part has to provide a uniform interface to very different kinds of data, allow them to be grouped, searched, manipulated, etc.

as for corba - the architecture for the web interface and archive processing is intended to be scalable (ie will likely end up distributed across several machines) and since corba supports distributed computing and is the basis of j2ee (i understand) it seemed the obvious choice.

but if you think something else would be better, please say so! that's why i posted here!

Isaac Gouy - Re: Case Study in Practical Language Use
5/23/2004; 3:00:44 PM (reads: 305, responses: 0)
Without a much richer appreciation of what the people who deal with this stuff experience as frequent problems or barriers, it's difficult to make any sensible suggestions.

Mark Evans - Re: Case Study in Practical Language Use
5/23/2004; 7:09:14 PM (reads: 275, responses: 0)
A possibility is Lush.

Ehud Lamm - Re: Case Study in Practical Language Use
5/24/2004; 2:42:38 AM (reads: 253, responses: 2)
Ah, just the sort of challenge I like. Without more information it is hard to give sound advice, but why should this stop us?

I am a great fan of the DSL approach, as readers here know. This is from practical experience (described here in the past) working on an in-house query DSL for a fairly large IT shop. That is quite different from number crunching, so I may be off base here, but from what I know of math-based DSLs (Mathematica, Matlab etc.) it seems that in this domain DSLs are also a good idea.

A DSL can help you deal with security, distribution etc. and reduce the programming effort required of scientific staff. Another advantage that may be important for you is optimization: it is easier to optimize a DSL if you design it correctly.
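
To make the optimization point concrete, here is a minimal sketch (Python, with invented names, not from any particular system): because the DSL builds an expression tree instead of executing eagerly, the implementation is free to rewrite the tree - here, fusing two per-pixel passes into one loop - before anything runs.

  # Minimal embedded-DSL sketch (Python, invented names): stages build
  # an expression tree rather than executing, so the implementation can
  # rewrite the tree (here: fuse adjacent per-pixel passes) before
  # running it.
  class Stage:
      def __init__(self, op, *args):
          self.op, self.args = op, args

  def read(path):         return Stage("read", path)
  def map_pixels(src, f): return Stage("map", src, f)

  def optimize(stage):
      # fuse map(map(s, g), f) into map(s, f . g) - one loop, not two
      if stage.op == "map":
          src, f = stage.args
          src = optimize(src)
          if src.op == "map":
              inner, g = src.args
              return Stage("map", inner, lambda x, f=f, g=g: f(g(x)))
          return Stage("map", src, f)
      return stage

  def run(stage):
      if stage.op == "read":
          return list(range(5))            # stand-in for pixel data
      src, f = stage.args
      return [f(x) for x in run(src)]

  pipeline = map_pixels(map_pixels(read("img.fits"), lambda x: x * 2),
                        lambda x: x + 1)
  print(run(optimize(pipeline)))           # [1, 3, 5, 7, 9]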

So I wouldn't go for a "general purpose" scripting language.

Regarding the programming constructs required, I am not sure you really need continuations, or even objects. The rest should be fairly easy to implement.

The in-house DSL I worked on was either interpreted or compiled into machine code (a fairly basic compilation process was all we needed). This can solve some of the problems you discussed.

I think typing can really help in your application domain, so I'd give this aspect some thought. This can help with speed as well as with some of the other issues you raised.

I wouldn't tie myself to Python for such a complicated system and programming domain. Python has one implementation and it is still a moving target (i.e., an evolving language).

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 8:29:28 AM (reads: 229, responses: 1)
it's difficult to know what's obvious and what's not, so i tried to explain the things i thought restricted the available languages. if people could provide possible solutions, rather than saying "not enough information", we might iterate to a useful result.

the problems with the current system, in my opinion, are:

- development is too slow. i spend too long writing (and, worse still, debugging for memory access errors) basic collection classes etc.

- memory errors appear in released code because not every combination of parameters was tested.

- there's little support for distributed computation.

- it's difficult to extend code that exists, because high-level abstractions are missing. function pointers do exist, but are infrequently used (again, they introduce more chances for memory access errors).

- it's difficult to integrate third party code, so everything tends to be done in-house.

- the scripting language is too weak to be used for assembling applications, so the "astronomy knowledge" goes down into the compiled layer. we end up with several large monolithic packages rather than lots of small scriptable ones.

lots of these problems can be ameliorated with hard work, good practice, and tools like electric fence. but it does cost time and effort.

the scripting language that exists is implemented as an interpreter in the compiled language (i'm avoiding names, because i'd rather this did not appear on a web search). i agree that neither objects nor continuations are important, i was just listing what's not there. to extend it to do dynamic memory management, support functions, and compile would probably require a rewrite (the memory management is probably pretty fundamental). compilation would have to be transparent to the user - maybe compiling down to some kind of byte-code would be sufficient.

it's perhaps not clear to others how much "business knowledge" there is in a system like this. the number crunching could be duplicated quickly, but there's a huge amount of astronomy know-how in there. at the moment it's deep within the compiled code. i think it would be better pulled up into the scripting language - one thing i've tried to work out the details for in the past was scripting on top of a layer that would automatically distribute computing across a net of machines, but it was too much to change at once.

as far as specialised types go - two things that strike me as "obvious" are physical units and error estimates. these are both managed separately from values (handling and propagating error estimates, or the lack of it, was one of the things that provoked my intemperate outburst).
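
to show what i mean, here's a toy sketch (python, all names invented) of a value that carries its unit and an error estimate with it, propagating both through arithmetic with the standard quadrature rules:

  # toy sketch (python, invented names): a value that carries its unit
  # and error estimate, propagating both through arithmetic.
  import math

  class Measure:
      def __init__(self, value, err, unit):
          self.value, self.err, self.unit = value, err, unit

      def __add__(self, other):
          assert self.unit == other.unit, "unit mismatch"
          # independent errors add in quadrature for a sum
          return Measure(self.value + other.value,
                         math.hypot(self.err, other.err), self.unit)

      def __mul__(self, other):
          v = self.value * other.value
          # relative errors add in quadrature for a product
          rel = math.hypot(self.err / self.value, other.err / other.value)
          # naive unit handling - a real version would simplify units
          return Measure(v, abs(v) * rel, self.unit + "*" + other.unit)

      def __repr__(self):
          return "%g +/- %g [%s]" % (self.value, self.err, self.unit)

  flux = Measure(120.0, 3.0, "counts/s")
  t = Measure(30.0, 0.1, "s")
  print(flux * t)   # 3600 +/- ~90.8 [counts/s*s]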

thanks for the comments so far. hope this helps. apologies that it's a bit scattered.

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 8:32:55 AM (reads: 220, responses: 0)
ooh! thanks for the lush pointer. missed that earlier.

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 8:41:32 AM (reads: 221, responses: 1)
if by "users" you meant "end-users", i think the things people complain about are poor responsiveness from us (developers), difficulty in developing tasks themselves (scripting), missing functionality, and lack of integration with other systems.

a lot of that comes down to us not having the time/funding to do everything people want. that's why throwing everything out and starting again isn't really an option. better pr might help too, but that's not an issue for this forum. it's also why i'd like a solution that makes the code easier to extend/modify.

Matt Hellige - Re: Case Study in Practical Language Use
5/24/2004; 8:52:50 AM (reads: 216, responses: 0)
As far as the scripting language goes, I'd take a serious look at Lua. I've been very impressed with what I've seen.

Andris Birkmanis - Re: Case Study in Practical Language Use
5/24/2004; 9:01:20 AM (reads: 211, responses: 1)
Not that my post is going to help you...

We had a similar problem in a web app engine/IDE product - end-users were unhappy with response times, as there was a very limited team of core developers who knew the innards of the tool. We are continuously refining the core, making it smaller and smaller, while delegating more to the end-developers.

So yes, you are right: you need to separate the problems into two or more layers - the harder the problems, the smaller the group that deals with them. Sounds like a DSL...

I think you might elaborate on what your distributed computing requirements are.

Regarding integration with J2EE - yes, currently RMI over IIOP beats the current WebServices bindings (SOAP) performance-wise, but that can change. On the other hand, it's OMG again, so I'm not sure what you gain by adopting WS, except that it's hotter (and unstable). Having an XML-based binding? Is that a benefit for you?

Paul Snively - Re: Case Study in Practical Language Use
5/24/2004; 10:31:35 AM (reads: 205, responses: 2)
Boy, this is a tall order to respond to. I'll have to content myself with cherry-picking what to focus on, which I'll do on the basis of a combination of experience and recent learning.

First of all, I feel compelled to point out that I was a J2EE developer for some eight years, and J2EE isn't based on CORBA. When Remote Method Invocation first hit the scene, it took some heat because it was a Java-only solution, so eventually Sun released RMI-IIOP, allowing RMI to be used in conjunction with the Internet Inter-Orb Protocol. But this is asking the wrong question, namely whether a Java-only or language-neutral wire protocol should be used for Remote Method Invocation, as opposed to the right question, which is "Should I use Remote Method Invocation in my distributed application?"

But the moment you talk about building a distributed system you introduce questions that strongly suggest backing up a step from the conventional wisdom, because the conventional wisdom in building distributed systems doesn't hold up very well in practice. CORBA has been rejected in the marketplace by all but some extremely well-heeled institutions who can afford the army of trained expert consultants in CORBA development that it takes to field these systems (full disclosure: I used to be one such consultant). It's not clear that the place to back up to is Java; nothing about Java supports distributed systems especially well, and in fact aspects of Java's concurrency implementation and insistence upon synchronous RMI make building reliable distributed systems in Java much harder than it should be.

Having said that, some other folks have already endured the pain and provide toolkits that do some amazing things, so you might want to take a look at JGroups, or perhaps one of the good Java Messaging Service implementations out there such as ActiveMQ or JORAM. Certainly a messaging-based approach is preferable to an RMI approach in Java, IMHO. The downside is that it's another API to learn and another product to select: it's not integrated into the language and its runtime environment, so there are still plenty of thorny design and implementation issues to contend with in such a system.

Which leads me to my final question/suggestion: if you know that scalability, availability, and flexibility are key, why not consider tools that support these concepts from the outset? I suppose Erlang would be one such choice, but I have to admit that I really have Oz in mind: its native support for declarative and/or message-passing concurrency, distribution, network-awareness, security considerations, and increasing number of available libraries, plus a very good FFI for interfacing to existing C/C++ libraries, make it an extremely attractive candidate, IMHO. As an example of this kind of application in Oz, see MathWeb. It seems to me that by choosing a tool like Oz, you can spend less time dealing with the problem of building a highly-available distributed system, and concentrate on the more interesting and useful problem of supporting astronomers in their work.

Those are my thoughts. I look forward to any comments or questions that they provoke.

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 11:10:27 AM (reads: 201, responses: 0)
I think you might elaborate on what your distributed computing requirements are.

new telescopes are being planned which take huge amounts of data. to reduce this quickly requires more processing power than a single workstation. i don't know who decided to go with networked computers rather than a more expensive machine, but that seems to be the direction.

someone is already working on this, but building everything from the ground up (mainly in this fortran 66 based language, again). various packages exist, but seem not to be suitable (again, i don't know details). i really don't know enough to say more; it just seems like it's an area that has been integrated more and more into languages, yet we're not taking advantage of that work.

omg is not a bad word here - we like uml :o) more seriously, i'll ask why that decision was made, because i don't know. there's supposed to be an international collaboration making standards for much of the interface, and i'd guess that would involve web services, but (1) i'm out of the loop and (2) i bet it's being argued about endlessly (although the reason i can't ask the person for more info now is that he is at a related conference, so perhaps there is progress).

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 11:52:44 AM (reads: 199, responses: 1)
thanks. i think you're focussing more on the java/web project, which is separate. maybe i've not made that sufficiently clear.

the data reduction software will be called by the web/archive system. the web/archive system is j2ee and is new. while i appreciate help on that, and will pass it on, my main concern here is for the 20+ year old data reduction system.

really, i do appreciate any comments, and maybe i've misunderstood some things, and mis-explained others, so this is just clarification, not criticism. (also, just writing this down is helping hugely. it's very difficult here because people already made up their minds about things, and have been fighting political battles over them for years. so it's useful to hear completely different opinions.)

availability is not critical for anything, except the back end systems that save raw data from telescopes.

so i'll pass on all the comments about corba etc to the person working on that project (with emphasis that i think the source is reliable ;o)

from your mathweb comments i guess that the last paragraph is referring at least partly to the data processing system. are you suggesting that oz would be a good basis for replacing the whole system, or that it would form the scripting/control layer on top of a core that, initially, would include all the existing compiled code?

presumably the latter, since speed is critical. i discarded oz, along with pretty much everything else, because i doubted it would be supported on the platforms necessary. but the number of platforms is decreasing, so maybe that's not such an issue. i will think seriously about how oz might work.

oz specific question, then. one of the useful things about the solution we have now is that tasks defined in the scripting language have parameters - typically 5 to 30 - that are persistent and editable. presumably something similar could be done in oz with persistent object attributes. does that make sense? any other way of implementing that?

the idea is that many parameters have default values that are fine; others need to be set only once, and the few that change from one invocation to another can be set at the command line. might seem like a silly little thing, but it's very useful and any replacement would have to replace it cleanly.
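
a rough sketch of that model (python; the task name, file path and parameters are all invented, it's just to pin down the behaviour a replacement would need): defaults in code, persistent overrides on disk, one-off overrides at the command line.

  # rough sketch (python; the task name, file path and parameters are
  # all invented): defaults in code, persistent overrides on disk,
  # one-off overrides from the command line.
  import json, os, sys

  DEFAULTS = {"exposure": 30.0, "filter": "V", "coadd": 3}
  PARAM_FILE = os.path.expanduser("~/.mytask.params")  # hypothetical

  def load_params(cli_args):
      params = dict(DEFAULTS)
      if os.path.exists(PARAM_FILE):           # set-once values
          with open(PARAM_FILE) as f:
              params.update(json.load(f))
      for arg in cli_args:                     # per-invocation values
          key, _, value = arg.partition("=")
          try:
              params[key] = json.loads(value)  # numbers, booleans...
          except ValueError:
              params[key] = value              # ...else keep as string
      return params

  def save_params(params):                     # "edit" = update + save
      with open(PARAM_FILE, "w") as f:
          json.dump(params, f, indent=2)

  if __name__ == "__main__":
      print(load_params(sys.argv[1:]))         # e.g. task.py exposure=60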

also, will oz still be around in 20 or 30 years time? that's how long the existing code has been carried forwards, and it's been possible because fortran 66 (sometimes via f2c - conversion to c) is still around.

andrew cooke - Re: Case Study in Practical Language Use
5/24/2004; 12:30:04 PM (reads: 196, responses: 0)
also, will oz still be around in 20 or 30 years time?

this was badly phrased. i doubt oz will be around that long - how do i deal with that? does it have a sufficiently small core that we could maintain it ourselves, for example?

Sjoerd Visscher - Re: Case Study in Practical Language Use
5/24/2004; 3:21:45 PM (reads: 170, responses: 0)
I agree with Matt on choosing Lua for the scripting. It is designed to be "a language for extending applications". Just read the abstract and conclusion of this: http://www.lua.org/spe.html

If I understand correctly it should compile on all those Unices. And it is easily extendable so it's a good start for creating a DSL.

There also seems to be a distributed Lua, called ALua: http://alua.inf.puc-rio.br/

Paul Snively - Re: Case Study in Practical Language Use
5/24/2004; 11:14:15 PM (reads: 140, responses: 0)
andrew cooke: thanks. i think you're focussing more on the java/web project, which is separate. maybe i've not made that sufficiently clear.

No, I was trying to talk about the whole system, but since you mentioned J2EE and I've spent a lot of time there, I wound up kind of focusing on it.

andrew: availability is not critical for anything, except the back end systems that save raw data from telescopes.

Fair enough; that simplifies things!

andrew: so i'll pass on all the comments about corba etc to the person working on that project (with emphasis that i think the source is reliable ;o)

Heh. I appreciate that. I had high hopes for CORBA at one point, and I actually think that a good modern ORB with good services such as Cos Naming and an Interface Repository is a wonderful thing. There's even a very nice scripting language, CorbaScript, that makes CORBA about as approachable as I think it can reasonably be. But it's still a lot of infrastructure to develop for, configure, and maintain, and I just think it can be done better.

andrew: from your mathweb comments i guess that the last paragraph is referring at least partly to the data processing system. are you suggesting that oz would be a good basis for replacing the whole system, or that it would form the scripting/control layer on top of a core that, initially, would include all the existing compiled code?

I'd really meant to suggest that Oz might be rather broadly applicable, but clearly there are going to be hard-core cases where you'll need to call out to C or C++, or FORTRAN 66. Oz has a very nice FFI, so you can do that sort of thing, but then it really doesn't address the issue of raising the astronomy knowledge up out of the compiled code, unless there's some way that the "astronomy knowledge" aspect and the "high-performance numerical computing" aspect can be separated, e.g. there's a hairy matrix transform that's done in C, but what some combination of hairy matrix transformations means in astronomical terms is written in Oz. To me, that would be the hardest aspect of the rearchitecture.

andrew: oz specific question, then. one of the useful things about the solution we have now is that tasks defined in the scripting language have parameters - typically 5 to 30 - that are persistent and editable. presumably something similar could be done in oz with persistent object attributes. does that make sense? any other way of implementing that?

It makes perfect sense. Oz, like many modern programming languages, has a "Pickle" module that allows the programmer to easily persist language entities as long as they are bound. These can be simple values (integers, strings...) or complex ones (records, objects)...

I should also point out that this observation is instrumental to Oz's ease of distributed programming: a language entity is made available to connect to by creating a "ticket" to it using the Connection module. A "ticket" is just a string that identifies the language entity as a path of some kind, from host and port to process to entity in the process. Note that I said that a ticket is a string. What can you do with a string? Pickle it! So the easiest way for an Oz process to get a reference to an entity in another Oz process is to create a ticket to the entity, pickle the ticket, get the pickled ticket to the other process, unpickle the ticket, and accept the ticket, at which point the 2nd process now has a reference to the entity in the first process. All the thorny details, e.g. if the entity is a Cell and the first process changes the value of the Cell, meaning that the 2nd process needs to be notified so its view is updated, etc. are taken care of under the hood, although those processes can be customized if necessary.
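
For a rough analogue in more familiar terms (Python with invented names, not Oz, and only an approximation of the idea): one process serves an entity, the "ticket" is just address data that can be pickled and handed to another process, and connecting with the ticket yields a live proxy to the original entity.

  # Rough Python analogue of the ticket idea (invented names, not Oz).
  # One process serves an entity; the "ticket" is just address data
  # that can be pickled and passed to another process; connecting with
  # it yields a live proxy to the original entity.
  from multiprocessing.managers import BaseManager

  class EntityManager(BaseManager):
      pass

  shared = {"target": "M31", "exposure": 30.0}   # the shared entity

  def serve():                                   # first process
      EntityManager.register("get_entity", callable=lambda: shared)
      mgr = EntityManager(address=("localhost", 50000), authkey=b"demo")
      mgr.get_server().serve_forever()

  def connect(ticket):                           # second process
      host, port = ticket                        # the unpickled "ticket"
      EntityManager.register("get_entity")
      mgr = EntityManager(address=(host, port), authkey=b"demo")
      mgr.connect()
      return mgr.get_entity()                    # proxy to remote dict

  # e.g.: proxy = connect(("localhost", 50000)); proxy.get("target")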

All of this is a lot harder to explain than it is to do: I suggest you install Mozart-Oz if you haven't already, then go through Concepts, Techniques, and Models of Computer Programming, paying particular attention to Chapter 11 regarding distributed programming.

andrew: the idea is that many parameters have default values that are fine; others need to be set only once, and the few that change from one invocation to another can be set at the command line. might seem like a silly little thing, but it's very useful and any replacement would have to replace it cleanly.

It doesn't seem the least bit silly, and IMHO any language designed in the last decade that doesn't support some form of persistence out of the box, or at least easily via a small library that doesn't need per-application customization, is too brain-damaged to concern myself with.

andrew: also, will oz still be around in 20 or 30 years time?

this was badly phrased. i doubt oz will be around that long - how do i deal with that? does it have a sufficiently small core that we could maintain it ourselves, for example?

There are, I believe, a few components to an answer to this:

  1. Oz has already been around in one form or another for over a decade; I suspect it won't vanish suddenly.
  2. Mozart-Oz is open-source software, so if you're concerned about it disappearing, by all means grab the source code.
  3. Oz is indeed built upon an extremely simple core. CTM exposes its concepts, in fact, by means of the "kernel language" approach: a kernel language supports only those concepts that are absolutely fundamental to the model. In CTM, each new model discussed adds exactly one new concept to the kernel language. A pragmatic language is then defined that operates by translation to the kernel language. All of this is documented, in the end, in Appendix D of CTM, where the most general kernel language description fits in half a page, supported by the portions of the text that describe each supported model, and by Appendix C, which discusses the full Oz syntax. Of course, among Oz's innovations is supporting this level of generality efficiently.
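
To give a taste of the kernel-language idea (sketched here in Python rather than Oz; the fun-to-proc translation shown is the one CTM actually uses, the node encoding is invented):

  # Sketch of the kernel-language idea (Python stand-in, not Oz; node
  # shapes are invented, but the translation is CTM's fun-to-proc rule:
  #   fun {Sq X} X*X end   ==>   proc {Sq X R} R = X*X end
  # i.e. a function is a procedure with an explicit result variable).

  def desugar(node):
      kind = node[0]
      if kind == "fun":              # pragmatic: ("fun", name, params, body)
          _, name, params, body = node
          return ("proc", name, params + ["R"], ("bind", "R", body))
      return node                    # everything else: already kernel

  sq = ("fun", "Sq", ["X"], ("mul", ("var", "X"), ("var", "X")))
  print(desugar(sq))
  # ('proc', 'Sq', ['X', 'R'],
  #  ('bind', 'R', ('mul', ('var', 'X'), ('var', 'X'))))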

So hopefully this has provided some additional food for thought with respect to your choices. Even if Mozart-Oz should prove insufficient for your needs for either technical or political reasons, I hope that you're sufficiently encouraged to at least download the system and kick the tires, perhaps via CTM. I think you'll like what you find.

Andris Birkmanis - Re: Case Study in Practical Language Use
5/24/2004; 11:56:29 PM (reads: 140, responses: 1)
From what I know about Oz (not much), its main value is supporting multi-paradigm programming, including concurrent programming. From what I know about Andrew's system (even less), it looks like a programmable multi-pipeline data processor. Andrew, do you need concurrency at all? Or is plain parallelism what you really want? Do astronomy experts program small stream processors, which are then wired and distributed across the farm, or do they provide smart wiring of pre-built processors?

I guess the main question is - what is the balance between streamness and messageness in your system? That may influence a lot.

andrew cooke - Re: Case Study in Practical Language Use
5/25/2004; 7:22:00 AM (reads: 121, responses: 0)
astronomy experts currently do neither, much, but when talking of things in the future, "pipelines" is the word that's heard most often.

i don't understand the streamness/messageness distinction, but the way things are done in the current pipeline systems is to process N images on N machines. the bottleneck comes when these images need to be combined (as they invariably do).

in other words, it's pretty crude and treats fairly large chunks of data in parallel.

if multi-processor machines become more common (and that seems likely - multi-core processors, particularly) then we'd need to support parallelism within an image. that would probably be handled by library routines in the compiled language, which tend to work on image rows whenever possible. this wouldn't need to appear in the scripting language - it would be transparent to the user. memory access issues mean that we already avoid processing techniques that don't access images sequentially, whenever possible.
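
for concreteness, the current scheme is roughly this shape (python sketch with invented names - the real system is not python): reduce N images in parallel, then pull everything back together in one place, which is where the bottleneck shows up.

  # rough sketch (python, invented names) of the current shape: reduce
  # N images in parallel, then combine on one machine - the bottleneck.
  from multiprocessing import Pool

  def reduce_image(path):
      # stand-in for per-image calibration/reduction
      return sum(range(10))

  def combine(reduced):
      # stand-in for stacking the reduced images; runs serially
      return sum(reduced)

  if __name__ == "__main__":
      paths = ["img%d.fits" % i for i in range(8)]
      with Pool() as pool:                        # one worker per core
          reduced = pool.map(reduce_image, paths) # the parallel part
      print(combine(reduced))                     # the serial part: 360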

i'm a bit busy today - it's my last day on shift and i need to sort things out for a trip to hawaii next shift - so forgive me for not answering all posts here. they are very much appreciated!

(and i did play with oz quite a while back - my daily commute to my last job involved reading maybe the first 1/3 of the oz book draft. it never grabbed me, for some strange personal reason i can't explain, but that's something i need to work on :o)

Isaac Gouy - Re: Case Study in Practical Language Use
5/25/2004; 1:14:44 PM (reads: 108, responses: 0)
the problems with the current system, in my opinion
Do others have a matching problem list (including users)?

My concern would be that the project may become a promise of jam tomorrow - there needs to be a mix of concrete short-term issues that can be resolved as part of the technology upgrade.

Noel Welsh - Re: Case Study in Practical Language Use
5/26/2004; 7:51:56 AM (reads: 79, responses: 1)
Here's a combination no-one has suggested:

SAC for fast number crunching, which features "implicit support for non-sequential program execution on multiprocessor systems", and the scripting language of your choice (well, Scheme :) to tie it together. SAC can be found at http://www.sac-home.org/

andrew cooke - Re: Case Study in Practical Language Use
5/26/2004; 8:21:19 AM (reads: 82, responses: 0)
that's nice (sac). one thing i've avoided here, but which is a huge factor, is how to sell any changes (someone else asked for problems that would be solved, which is related to that - part of the reason i've not replied, apart from running round like crazy at the mo, is that i want to avoid being too critical of what we have, especially when it gets more into "perceived" problems of users). anyway, a c-like syntax is a big plus just because it's more familiar.

now i must get back to trying to get linux working on my laptop so i can demo the new code i've been working on (there's a reason why windows support would be a plus ;o)... thanks again.