## Flower: a new way to write web services

Flower is a new kind of user-programmable web service, especially well suited to applications that process, store, and query XML data sets. Clients of a Flower web service interactively modify and extend the code the server runs. This is the ordinary way to build new Flower applications. Flower is a true web operating system in the sense that it forms a self-contained, web-addressable computing environment.

This will be a bit of a grab bag, and a longer-than-average post, but I'd like to point out some aspects of Flower that might be of interest to PLT'ers.

The Flower "kernel" and Flower language interpreter were designed and first prototyped as a program running on the XQVM virtual machine that I posted about on LtU earlier. The current implementation is a "hand-compiled" XQVM program.

The web service frameworks that are popular today typically combine two languages: a database query language and a general-purpose programming language. Familiar combinations pair Ruby, Python, PHP, Perl, or Java with SQL. These combinations suffer from the famous "impedance mismatch" between database data and run-time data. The impedance mismatch is often compounded when clients expect yet a third data model, such as XML.

Flower lives in the XML world, from top to bottom. It uses an XML database. It uses the XQuery language to express server-side computation. And it uses a very slender general-purpose language (implemented in hundreds of lines of code) for the sequencing of side effects. This approach appears, so far, to be vastly more parsimonious than any other yet tried, in no small part because it suffers far less from the "impedance mismatch" problems.

Flower may also be interesting because of the nature of the tiny language it uses to sequence side effects. Programs are written in a continuation-passing style, with the unusual addition that programs can not only capture but also directly construct their continuations from constituent parts. This ability to construct a continuation affords Flower, among other things, a hook for introducing syntactic abstractions. It isn't precisely an Actors language or a Lisp, but those are the best comparisons I've found so far.
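The construct-your-continuation idea is easier to see in miniature. Here is a toy Python sketch (none of this is Flower syntax; `run`, `double`, and `maybe_loop` are invented for illustration): the rest of the computation is an ordinary list of steps, and any step may rebuild that list from constituent parts before handing it back.

```python
# Toy sketch of constructible continuations: the continuation is plain
# data (a list of steps), so a step can compute a new one.

def run(state, continuation):
    while continuation:
        step, continuation = continuation[0], continuation[1:]
        state, continuation = step(state, continuation)
    return state

def double(state, k):
    return state * 2, k

def maybe_loop(state, k):
    # Constructing a continuation from constituent parts: while the
    # value is small, prepend more work onto the remaining steps.
    if state < 100:
        return state, [double, maybe_loop] + k
    return state, k

result = run(3, [double, maybe_loop])   # -> 192
```

Looping, and by extension syntactic abstraction, never needs dedicated interpreter support here; it falls out of letting programs manufacture their own continuations.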

I'm curious what LtU'ers think. I'm particularly interested in learning how to better and more concisely present what is "interesting" about Flower.

## Comment viewing options

Can you fix the broken link?

### Fixed

Thanks, looks like the link was entered as relative not absolute. I've made the obvious fix (from basiscraft.com to http://basiscraft.com/). Tom can correct further if necessary.

-t

### Re: impedance mismatch

First off, three cheers for people thinking and implementing new and different ideas.

Presumably there is a reason to sometimes use a relational DB and SQL, even if it doesn't match your programming language's approach to modeling. The right tool for the job and all that. Can you describe some rules of thumb or use cases for when avoiding an RDB and using XML (the hierarchical model) instead is good? The FAQ apparently only claims speed as a weakness.

One quick answer is perhaps the obvious one: when you are interfacing with or supporting systems that use XML, such as some web service systems. Are there other, more general cases -- like building the next interactive travel fare search & reservation site -- that could serve as examples? On the presentation side, presumably it can be XML, in that one could be emitting something like XHTML. On the database side, presumably one needs to know when departing from the Codd-style relational approach is appropriate?

Also, what does Flower source code look like? On the face of it, I don't particularly relish typing the overhead required by XML vs. other programming languages, and visually it isn't a particularly readable language to me.

### re: re: impedance mismatch

"three cheers" -- thank you. Warms my heart. I invested my of my autodidactism over the years towards that. That and buck-fifty will buy me a small cup of coffee :-)

"[when RDB? when XML? rules of thumb?]" It is too early to try to give rules of thumb for front-line, banal, production coding projects. Just because of sunk costs, the classic RDB-backed systems will remain a good choice for many projects for a long time. But, I think we can spot some trends:

First, separate concerns. At the least, it is good to think separately about how data is represented on disk, about run-time representation, about query optimization, about the choice of query language, and about how application-specific data models are designed.

On disk, sometimes you want a flexible default, such as a system that can store arbitrary XML data. Other times, for performance reasons, you want more specialized representations -- traditional RDB approaches and "column-oriented RDB" approaches are two good examples that can make huge bottom-line differences in efficiency. What appears to be developing (e.g., in DB2 and Oracle) are systems that put an XML "face" on relational tables. In other words, from the XML perspective, the physical model is just a representation hint that can be applied to some data models.

At run-time, consistency of syntax and semantics, combined with a semantics that can be flexibly implemented, is key to eliminating impedance mismatches, both in how code is structured (e.g., eliminating the need for data type conversions) and in how code is run (e.g., parsimony in run-time representations). As a trivial example, the database, server-side code, and client protocol might have three incompatible notions of what constitutes a "string" or a "date" -- or they might all agree. Is it obvious why agreement leads to parsimony and efficiency?

On query optimization: my impression from the big guys (DB2 / Oracle) -- and it is only an impression from sketchy clues -- is that on the one hand XQuery demands new work in optimization and, on the other hand, existing optimization techniques apply to the XQuery world (especially in cases limited to XML known to be modeling relational tables).

On query language choice: my opinion is that, but for a small matter of hacking, and but for momentum and sunk costs, SQL will retain no advantage at all over XQuery (especially as update and grouping support become mainstream in XQuery). Not only no advantage: XQuery has distinct advantages, because it is very good for much more than just database queries, as XQVM and Flower show.

On data models: so, should we stop teaching students how to normalize tables? I don't know, but I doubt it. They can (eventually) just learn those things in a different syntax, basically. To really get what all that is about, they will always have to appreciate what's going on at the physical level and also why normalization matters at the program design level.

Also, what does Flower source code look like?

Mostly like XQuery with a little bit of XML wrapped around it. The XQuery programs you wind up writing are unusual in many ways, including sometimes needing awareness of the syntax and semantics of the XML wrapper (which is used for sequencing side effects).

To my mind (and experience) it's not bad and sometimes delightful -- but it cries out for syntactic abstraction. I've lately been thinking about how to generalize and apply ideas from Subtext (on LtU) to make it easier to program and to unify client-side, server-side, and indifferent-middle coding.

-t

### This approach appears, so

This approach appears, so far, to be vastly more parsimonious than any other yet tried, in no small part because it suffers far less from the "impedance mismatch" problems.

I am also curious if you have any papers that discuss how Flower reduces the impedance mismatch, be it white papers or conference papers or journal papers or notes on paper napkins folded into airplanes.

In my humble opinion, many people think of "ORM" as a requirement, when there only requirement is database abstraction.

As far as human typeable and human readable goes... I don't care.

Are any of your ideas influenced by Charles Bachman?

I misthreaded my longer answer, which appears below ("my peer reviewed papers").

Short of papers, though: to bring Ruby to SQL for Rails takes "Active Records" or whatever it is called. That's one example of an impedance mismatch. Flower (and XQVM) need no such code.

-t

### I don't need examples of an impedance mismatch.

I don't need examples of an impedance mismatch. I need to better understand the quote highlighted in my reply.

I see. Well, actually, the ActiveRecord example is part of what I meant by extant evidence of effective parsimony.

But for code maturity (e.g., there's at least one exception check that the current code is annoyingly lacking), Flower as it stands pretty much blows the low end of RoR applications out of the water: at least as capable, but taking orders of magnitude less code to implement. I don't need anything like ActiveRecord. I don't need a separate reader and serializer for the Flower language -- those are already built into the underlying DB, just as analogous functions are built into relational databases.

Another example is schema definition. Even a trivial RoR application needs you to go in and define tables -- no need here. This runs deep: for example, if I do have an (XML) schema, the DB itself already understands it and can enforce the schema as a run-time constraint on updates.

Another example is laziness. DB XML's XQuery "wants" lazy evaluation in some cases for efficient querying, and this is conveniently available in the API as well. To keep things simple, Flower 0.5 doesn't take advantage of this, but later versions should. The parsimony is in the fact that I need take no special steps to add lazy data structures to the Flower language, because they just "fall out" of XQuery semantics generally, and of this implementation in particular.
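By analogy only (Python generators standing in for XQuery's lazy sequences; `documents` is an invented stand-in for a database scan): the "query" below describes the whole result set, yet items are produced on demand, so taking the first match never touches the rest.

```python
# Laziness falling out of the semantics: nothing here is evaluated
# until a consumer asks for a value.

def documents():
    # Pretend this streams documents out of a database.
    for i in range(10**9):
        yield {"id": i, "kind": "msg" if i % 2 else "meta"}

# A lazy "query" over the stream -- no work happens on this line.
matches = (d for d in documents() if d["kind"] == "msg")

first = next(matches)   # only two documents are ever materialized
```

The point of the analogy is that the language needs no special "lazy data structure" feature; demand-driven evaluation is simply how sequences behave.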

Another example is avoiding conversions. In RoR, if I'm going to read a row that contains a number from the DB and convert it to serialized XML for an HTTP reply, the number will go through two conversions: first into the run-time representation and then into the serial form. Here, in a lot of circumstances, especially once laziness is fully exploited, it's just read from disk and streamed to the wire.
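A hypothetical illustration of the two paths (the names and the regex-based "parse" are mine, not RoR or Flower code):

```python
import re

stored = "<price>42</price>"          # the serialized form on disk

# ORM-style path: parse into a run-time value, then re-serialize.
number = int(re.search(r">(\d+)<", stored).group(1))   # conversion 1
reply_orm = "<price>%d</price>" % number               # conversion 2

# Stream-through path: the stored form is already the wire form.
reply_stream = stored                                  # no conversions
```

Both replies are byte-identical; the ORM path just did two representation changes to arrive at what was already on disk.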

I might have some more on this here wadded-up napkin, but is that more helpful?

-t

### Specific example of the impedance mismatch required...

...to bring Ruby to SQL for Rails takes "Active Records" or whatever it is called. That's one example of an impedance mismatch.

Could you please outline the conceptual and technical difficulties that have arisen from the ActiveRecord-style approach to database abstraction and object persistence?

### re specific example question

Judging by the threading of your message, I'm not sure whether your question is or is not answered by some of the material previously posted (in different branches of the threading tree). Could you please be more specific about what, if anything, is still unclear?

-t

### my peer reviewed papers

I am also curious if you have any papers that discuss how Flower reduces the impedance mismatch, be it white papers or conference papers or journal papers or notes on paper napkins folded into airplanes.

I wish. Well, "notes on paper napkins" is closest to right. I am materially wealthy for the moment (not so clear for February) but financially extremely poor (hence the jeopardy to my material wealth). (By "materially wealthy" I mean that I have a comfortable roof over my head, food to eat, and some nice tools.) I am uncredentialed and unaffiliated, so I tend to lack access to funds that would support paper writing. I am trying to raise funds this year to more fully develop the technology, including writing papers for peer review. Wish me luck :-)

In my humble opinion, many people think of "ORM" as a requirement, when there only requirement is database abstraction.

In the post I mentioned wanting to learn how to better, more concisely express this thing. I can't promise to remember to cite your LtU comment (I'll try to remember, where I can) -- but I think that meme of yours is one I'm likely to re-use. I agree, in other words. The word "many" is an important qualifier, of course.

Are any of your ideas influenced by Charles Bachman?

I had to web-search for his name so, no, in the sense that I didn't study his work directly. That does not mean that I was not influenced, though. My exposure to a lot of knowledge about "database stuff" is through self-study of various hot papers and hot products that were contemporary over the past couple of decades. I briefly worked at one of the original OODB companies during their start-up phase. I got a lot of background ideas from trying to understand the context of work on "Berkeley DB" and file system work by multiple people at UC Berkeley.

-t

### I can't promise to remember

I can't promise to remember to cite your LtU comment (I'll try to remember, where I can) -- but I think that meme of yours is one I'm likely to re-use.

...aw, shucks, and I didn't even spell "their" correctly. :-)

Are any of your ideas influenced by Charles Bachman?

I had to web-search for his name so, no, in the sense that I didn't study his work directly.

He is a Turing Award winner, cited for his contributions to database modeling, including the Bachman Diagram. He is also known as being a rival of sorts with Codd. In a nutshell, Bachman = Navigational and Codd = Relational. [Edit: Also, come to think of it, his Turing Award speech is mentioned in Code Complete.]

self-study of various hot papers and hot products

Such as?

From: http://basiscraft.com/background/pub.pdf

Typical solutions contain a network protocol engine (e.g., a web browser, such as Apache1) [...]

You mean web server. There may be more mistakes, but I stopped there for now.

"active object" library
(typically large)

Please elaborate by what you mean here. A reference is probably as good as an explanation.

### the background paper

Typical solutions contain a network protocol engine (e.g., a web browser, such as Apache1) [...]

You mean web server. There may be more mistakes, but I stopped there for now.

Yeah, that's a nasty gaffe.

Please be aware that I do not promote those 100 pages as any of: invitingly easy to read, well written, academic quality, or even particularly well proof-read. It was not written for those purposes. It was written to get some of the basic inventions across, and it was written in a hurry for legal reasons. It is 100 pages because the patent office imposes a surcharge on longer documents. Unpacking that document and making it more user-friendly is one of the things I'd like to work on this year.

"active object" library
(typically large)

Please elaborate by what you mean here. A reference is probably as good as an explanation.

The Ruby on Rails source code -- as well as the ActiveRecord idea ported to other languages -- is evidence of size.

That component, in those systems, implements a kind of brute-force object-relational mapping. Relational tables of certain formats can be automatically presented at run-time as mutually referencing objects in the run-time language.

-t

### That's what it sounded like

I figured that ActiveRecord was what you meant. The major failing of Rails, from my perspective, is that David Heinemeier Hansson only seemed to partially understand his own invention. From an architectural perspective, Rails uses a set of constraints that have been around for a long time. The thing is, though, this set of constraints, implemented to its logical conclusion, is better suited to maintenance and changing scope than to rapid application development.

That component, in those systems, implements a kind of brute-force object-relational mapping.

...and what would an object-relational mapper look like if it were not brute-force?

The Ruby on Rails source code -- as well as the ActiveRecord idea ported to other languages -- is evidence of size.

JMHO: While size is an indicator of health, I think it is a red herring as compared with business common sense. The biggest problem I see with Rails is that it doesn't go all the way in lowering the sunk costs that prevent rapid application development, which seems to be the major selling point of Rails. In other words, I do not care about the size of my asset, so long as its value to my business gives me a competitive advantage in the marketplace. If I ever need to transport that asset to a new language (e.g., for performance reasons), then I can use part of the profits gained from my competitive advantage to do so.

### the baker and the restaurateur

JMHO: While size is an indicator of health, I think it is a red herring as compared with business common sense. [....] In other words, I do not care about the size of my asset, so long as its value to my business gives me a competitive advantage in the marketplace. If I ever need to transport that asset to a new language (e.g., for performance reasons), then I can use part of the profits gained from my competitive advantage to do so.

That does seem to be "business common sense" and it also seems quite irrational if you consider the exceptional nature of software.

Here's where your kind of thinking works: a bakery. A newly formed bakery, with limited capital, has a choice. Today, they could buy the less expensive, lower capacity oven and immediately begin selling cookies. In the alternative, they can (let's say) raise three times the seed capital, and buy the fancier, higher-capacity oven. Which is the better choice?

Well, it's uncertain whether the bakery's cookies will be a hit in the marketplace -- there is doubt whether the more expensive oven will ever be used to capacity. On the other hand, it's certain that if the cookie is a hit, the less expensive oven will quickly become a bottleneck, and the bakery will have to upgrade. While money and time are being spent on the upgrade, the bakery will be passing up profits.

If the bakery knew that the cookie was certain to be a hit -- could really prove it -- then actually raising more capital and getting the bigger oven is the better idea. That path saves money in the long run and maximizes the profit over, say, 3 years.

But the bakery doesn't know. The baker can imagine what demand there will be if the cookie is a hit but won't count on it. They must discount the potential future profits to account for the uncertainty. A $1M profit potential from the higher-priced oven is, today, valued at well under $1M. Our entrepreneur-baker has an incentive to buy the lower-priced oven, possibly give up some future profit to the resulting bottleneck, but take less up-front risk and at least start making some profit earlier. As you say, that initial profit can then later pay for the oven upgrade.

That is probably one of the more important reasons why approximately capitalistic economies tend to develop faster and better than centrally planned economies: they manage risk better.

"Systems software" or "platforms," though, are not like ovens. The analogy simply doesn't hold, most of the time.

In the baker's case, it's useful (from the baker's perspective) not only to think about the ovens themselves as commodities but, more importantly, to think of the actual commodity being purchased as "cookies-per-hour capacity". The baker's first decision -- the largest orders of magnitude on the balance sheet -- determines whether to buy capacity for fewer or more "cookies per hour". There is no essential difference between the two ovens besides that.

Software, on the other hand, is much more like the restaurateur's problem. The restaurateur plans a business not around the success of a particular cookie, or even necessarily a signature dish. Rather, the restaurateur maximizes long-term profit by maximizing improvisational capacity. The restaurant will always be buying ingredients on the market of the day and rendering them as dishes served. So, what of the restaurateur's initial equipment choices?

On the inexpensive end of the spectrum, the restaurateur could buy a fast-food franchise. Extremely specialized equipment can be had at a low price to get the business going: a branded french-fry vat; a branded flat-top for grilling burgers; a milkshake dispenser that accepts flavor cartridges from the franchise parent company.

In the alternative, the restaurateur could set up a sensible kitchen: ovens, broilers, range tops, a convection oven, counter space, refrigeration, good shelving, good knives, a linen service, etc.

The restaurateur's tool choices determine a "basis set" whose "span" defines the range of food products the restaurant will be able to serve.

A kitchen suitable for, say, a "fine dining" experience costs more up front than the franchise, but there is no comparison to the baker's case. The baker who buys the smaller oven can transition to the larger one with little difficulty. The restaurateur who buys the fast-food restaurant will have considerable difficulty growing it into a fine-dining establishment.

A restaurateur might reason, "I'll buy the fast-food franchise, make some profit, then use that profit to set up a fine-dining establishment." Some do that. Yet it's a terrible strategy if the demands of diners and the supply of ingredients are chaotic. The restaurant that depends on the price and availability of Acme Milcshak Mix has a hard time, in such circumstances, against the restaurant that can nicely render whatever ingredients happen to be affordably trucked into town.

Modern IT, especially in web applications, is a bit like starting a restaurant in that chaotic environment. Futures are always difficult to price -- it is unclear what the profit will be from the improvisational capacity of a generalized kitchen. Yet, it is an error to overly discount that improvisational capacity, especially when so much of the IT trade press is screaming about the need for agility in reacting to new customer needs and new opportunities at a rapid clip.

So, at the end of the day, when choosing to double down on Rails vs. going all out on Flower -- or anything in between -- the informed investor or IT specialist has to really understand, at levels that experts have some handle on, what the future of the improvisational capacity of the system is -- and what kinds of lock-in result from choosing lesser improvisational capacities up front.

-t

### The benefits of a great virtualization architecture

That does seem to be "business common sense" and it also seems quite irrational if you consider the exceptional nature of software. [...a bunch of culinary metaphors...]

I think we are inadvertently talking past each other. I think your metaphors complement what I have to say, and that you do not actually oppose me in the slightest. The reason to port the asset comes out of demand from the marketplace. If the client wants MySQL, and she is willing to pay for MySQL, then the logic necessary to support MySQL will be added, at cost to her. It seems surprisingly trivial to get purchasing managers to approve these costs, so long as they are not accompanied by the purchase of new hardware. As a purchasing manager, when you purchase new hardware for a new solution, your supervisor is going to want to know why the AS/400 system can't be used instead.

You said you liked my point that development requirements usually only require database abstraction, and that ORM is not necessarily a real requirement. Now you need to become aware of how true that is, and realize that whenever I speak about assets, I'm very much talking about assets with what you call improvisational capacity.

the informed investor or IT specialist has to really understand, at levels that experts have some handle on, what the future of the improvisational capacity of the system is -- and what kinds of lock-in result from choosing lesser improvisational capacities up front.

Isn't improvisational capacity why IBM's System z / zSeries architectural support for full virtualization is superior to EM64T/AMD64 full virtualization, though? x86 onward tries to provide backward compatibility for both the operating system and the applications. System z simply allows applications written in the 60s to run on operating systems written today, on today's hardware, and run much faster than they ran in the 60s. In addition, you can purchase special processing units for the mainframe, since there are six classes to choose from (with certain constraints that make some combinations more desirable than others).

There's always going to be code that can't be rewritten. Maybe the source files are missing, or the documentation is scarce or its accuracy is questionable. The business upgrades the mainframe and the operating system, and copies over the binary.

The exceptional nature of software is that those buying it can end up with a license to something that does not meet their needs. You can't liquidate a license. It's a sunk cost. Every day there is at least one business somewhere that sinks tens of thousands of dollars into a system that was a mistake from the get-go. I don't think culinary metaphors capture this problem in the slightest. Improvisational capacity doesn't extend to here.

### hmm.. what do i mean by "improvisational capacity"

Ok, I don't disagree we might be talking past each other a bit.

I (now) think that you are at least talking about how businesses do their accounting and how that relates to the market for software. For example, fancy hardware is a capital expense that depreciates over a long time. A software license is a smaller monetary investment and depreciates quickly. Something like the z-series contributes "improvisational capacity" in this context partly because it protects successful past investments in software (that code from the 60s has paid for itself how many times over?) while, at the same time, the hardware is flexible enough to serve as a platform for new tools (e.g., it can host Linux images, just in case, say, your new analytics tool needs 'em). Am I reading you right?

And this makes it tough to raise investment for advances in software platforms because, when firms face software needs, they don't mind just buying whatever is around, assuming they'll shortly buy the next big thing down the road. So a plea of "Hey, stop and think, help develop this platform" falls on deaf ears -- they don't feel any pain that could be relieved by a nicer software platform. The software vendors can sort that out for themselves. Yeah?

(Are we off topic from PLT? Yes, certainly. And no, inasmuch as it relates to the overarching theme of disappointing technology transfer from the PLT world to business practice -- and this thread is as good a place as any for that topic.)

What do I mean by improvisational capacity? An anecdote from history and a comparison to the situation today:

From Brian Kernighan's contribution to the unix oral history project:

People would come in and they'd say, 'yeah this is nice, but does the system do X?' ... and the standard answer for all of this was, 'No, but its easy to make it do it ... It's sort of like a kit. And if you want a new thing, you can take the pieces out of the kit and assemble them to make your new thing .... So, we used to say ... 'Does it do X?' 'No, but it's real easy. Do you want one by tomorrow? I'll give you one by tomorrow'.

Specific technical innovations at the time created that then-unprecedented capacity to problem-solve in real time: innovations such as plain text files rather than records, and pipelines as a mechanism for composition. Complementary innovations like C, and a more or less portable kernel written in C, grew the "span" of this "basis set". One of the reasons unix took off was that managers eventually learned to think about their problems in terms of this new improvisational space.

Things are faster today. One of the problems/opportunities I read about IT departments facing these days is that their in-house users are so eager to improvise solutions that IT departments risk losing control when other departments start installing their own wikis, etc. The opportunity is how to better enable this while keeping it orderly.

-t

(Ack to Martin Hardie, whose writings drew my attention to that quote from the history project.)

### PLT that is and isn't math

I think the examples of improvisational capacity you are providing, both in metaphor and through historical examples, can best be explained through software componentry and isolating subsystems. Also, an academic resource for the history of pipes might be Douglas McIlroy's first public case for connecting streams of data: Mass Produced Software Components, from the 1968 NATO Conference on Software Engineering.

I use the word asset, and even the phrase business asset, a lot. I don't mean "business" in a way that drifts away from PLT, but in the fact that programming and PLT is a constraint optimization problem. I'm not promoting or denigrating any particular architecture, either; that is probably not the right way to solve problems. Instead, distilling the accomplishments of those architectures into a strong body of knowledge and creating a rich problem domain analysis is the right way to go. The design of many programming languages, like the design of many programs, follows this approach.

Are we off topic from PLT? Yes, certainly.

It is not at all off-topic. Anyone who claims that is too busy searching this discussion for mathematical notation. Programming language theory does not just discuss mathematics, but also how to solve software programming problems. Object-oriented programming is intuitively appealing, but it is not math. When should you use composition and when should you use derivation? I don't know; I can't present a mathematical proof that shows you when to use which. Contrast that with the relational model, where I can present Codd's normal forms and, perhaps less formally and more lightweight, Date's 12 Rules; I can provide you with a mathematical guide to normalization. Yet Nygaard, Dahl, and Kay have all won Turing Awards, and both award selections cited contributions to the notion of object-oriented programming.

Here's a rule of thumb: If you ever think something is off-topic, but still important to you, then you can reach me through the e-mail I provide in my LtU profile.

### ok.. i should stop apologizing

Yes, I'm too timid.

As you say, we seem to be having trouble finding points of disagreement. Another bon mot of yours worth underscoring (without in any way detracting from the importance of theory / math) is that programming and PLT is a constraint optimization problem. Yay! Engineering, at least as I recognize it.

-t

### The sequencing mini-language

I'd like to see an explanation of what that is and how it works.

### the sequencing mini-language

I hope you don't mind quick and informal. There's not a lot to it besides power.

In the ontology of Flower, the main data structure is the "document/message", which unifies messages, files, agents, objects, etc. From the XML perspective, these documents are serializable and persistable, and are kind of like RFC822 on steroids. They look like:

    <msg>
      <envelope> ... transmission meta data ... </envelope>
      <meta> ... content meta data ... </meta>
      <body> ... arbitrary ... </body>
    </msg>


So, the Flower "loader" combines one of those messages retrieved from a database with any parameters that might have been passed with the request. The details are banal but the result is a message with some envelop meta-data that looks like this:

    <envelope>
      <params> ... encoded request parameters ... </params>
      <filter>
        <service uri="service identifier"> ... static service params ... </service>
        <filter> ... the default continuation ... </filter>
      </filter>
      ... other, arbitrary envelope meta-data ...
    </envelope>


The single-step rule involves binding that message to the XQuery "dynamic context" variable "$results" and then:

    while there is a non-empty, recognized service request
        rewrite the message in $results[1] as described below
        perform the requested side-effecting service
        (and the service step may rewrite $results)
    elihw

The "rewrite" step is trivial. It removes the filter from the envelope but then adds back the default continuation filter, if there is one. It also adds in the current service request. Schematically, this envelope content:

    <filter>
      <service>X</service>
      <filter>Y</filter>
    </filter>

is rewritten as:

    <service>X</service>
    <filter>Y</filter>

A "service" can be just about anything, but hopefully just a few general purpose services do most of the heavy lifting. One example of a useful service is the side effect of stashing a copy of the current state of the message in a database. Another example might be issuing an HTTP request to some other web service and folding the results into the envelope meta-data or appending them to the contents of the XQuery "$results" register.
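To make the loop and rewrite rule concrete, here is a minimal Python model of them. This is purely illustrative: Flower itself does this with XQuery over XML envelopes, and every name here (`rewrite`, `run`, the dict shapes, the `std:log` service) is a hypothetical stand-in, not Flower's actual API.

```python
# Hypothetical model of the Flower single-step rule.
# A filter is modeled as a dict {"service": ..., "filter": ...};
# the real representation is XML inside the message envelope.

def rewrite(envelope):
    """Pop the pending filter: promote its <service> to the current
    request and keep its nested <filter> as the default continuation.
    Returns (service, new_envelope), or (None, envelope) if no filter."""
    filt = envelope.get("filter")
    if filt is None:
        return None, envelope
    new_env = dict(envelope)
    new_env["service"] = filt["service"]    # the current request
    new_env["filter"] = filt.get("filter")  # the default continuation
    return filt["service"], new_env

def run(envelope, services):
    """The single-step loop: while there is a recognized service
    request, rewrite the envelope and perform the service."""
    while True:
        service, envelope = rewrite(envelope)
        if service is None or service["uri"] not in services:
            return envelope                 # terminate: reply with the message
        envelope = services[service["uri"]](service, envelope)

# Example: a chain of two "log" services, schematically
#   <filter><service>X</service><filter><service>Y</service></filter></filter>
log = []
env = {"filter": {"service": {"uri": "std:log", "body": "X"},
                  "filter": {"service": {"uri": "std:log", "body": "Y"}}}}
final = run(env, {"std:log": lambda s, e: (log.append(s["body"]), e)[1]})
```

Running this performs the two side effects in order (`X` then `Y`) and returns the final message once the continuation chain is exhausted.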

Meta-programming is enabled if you include a service that, in the spirit of the agents view, we could call "become". Such a service request would have the form:

  <service uri="std:become">... XQuery source here ...</service>


That service would execute the XQuery in question and that query would run with "$results" bound as described above. The output of that query becomes the new contents of the "$results" register. Thus, programs can construct their own continuations by computation.
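A "become"-style service can be sketched in the same illustrative Python model. The caveat is larger here: in Flower the body of the request is XQuery source executed with "$results" bound, whereas this sketch substitutes a plain Python callable; the names and shapes are hypothetical.

```python
# Hypothetical sketch of a "become"-style service: the service body is
# a program that computes the NEW contents of the "$results" register,
# including its continuation.  In Flower the body is XQuery source; a
# Python function stands in for it here, purely for illustration.

def become(service, envelope):
    """Replace the whole envelope with whatever the embedded program
    returns -- i.e. the program constructs its own continuation."""
    return service["body"](envelope)

# A program that schedules one more step by building a continuation:
next_step = {"filter": {"service": {"uri": "std:noop"}}}
env = become({"uri": "std:become", "body": lambda e: next_step}, {})
```

After this call, `env` carries a freshly computed continuation (a pending `std:noop` request), which the single-step loop would then pick up on its next iteration.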

In Flower 0.5 the "become" service is not included (it's about 10 lines more code if you want to add it).

In Flower 0.5, when a Flower language program terminates (no service requested, unrecognized service requested, or out of continuations) the resulting message is sent back to the invoking requestor as a reply.

And that's it!

An interesting aspect of this is that it recapitulates how side effects, flow of control, and syntactic abstraction are implemented in the underlying XQVM model. XQVM itself has no notion of messages per se; it's lower level. But the compositional models are analogs. Flower started life as an interpreted XQVM program in a fully general XQVM interpreter -- Flower 0.5 is a restricted, "hand compiled" version of that interpreted program (for performance reasons).

-t

### Query Processor

Flower as distributed requires the XQuery processor from Oracle, Inc.?

### re Query Processor

Flower 0.5 is built to be linked with "DB XML," an open source XML database that is maintained and distributed by Oracle Inc. (It is obvious but there's no harm in mentioning that Oracle and I are not affiliated, that Oracle has no affiliation with Flower, and no endorsement by (or of) Oracle is implied.)

DB XML combines four separately developed open source packages, from various sources and with various histories. Low level storage management, transaction support, and indexing support come from "Berkeley DB" (which also currently is maintained by Oracle). XML parsing and printing, etc. come from Xerces, an Apache project. The XQuery engine is XQilla, which combines contributions from various sources. The tie that binds these things together and presents an XML database is "DB XML," originally an open source product of Sleepycat Software, which has since been acquired by Oracle.

The dependencies of Flower on DB XML are slender and conservative. It should be a simple matter to substitute any XML database and XQuery processor.

-t

### Wiki?

Are you going to recommend some web address (apart from this node on LtU) to be dedicated to information exchange among people trying Flower out?

### re wiki

Are you going to recommend some web address (apart from this node on LtU) to be dedicated to information exchange among people trying Flower out?

I don't have the resources to host a wiki and am not too enthusiastic about the free hosts I've tried. However, I've just now created a mailing list (a "Google group") for this: