Amazon Flexible Payments Service

When I heard of Amazon FPS (an overview of which can be found here), I knew a DSL must be hidden somewhere.

And indeed, the GateKeeper language is a special-purpose language for specifying (declarative) payment instructions.

Once again we see the importance of language design skills in today's marketplace...

So what do LtU readers think of the design of the GateKeeper language?



They have a "blackboard" that consists of logic (that is, single-assignment) variables. In some respects this feels reminiscent of, though not quite the same as, Soutei.


cool but not really RESTful - via Leonard Richardson

I was asking about the

I was asking about the language design, not the web service design...

How is it different from configuration file, in essence?

(I really lack the credentials to challenge, but...)
How is it different from a configuration file, in essence? Any programmer who has written an application in a non-interpreted language has had to come up with a configuration file format. This takes that half a step further. The language lacks proper branching or looping, hence I'd be reluctant to call it a "programming language".
On the other hand, giving power users the ability to easily interact with complex systems is a challenging and worthy task, and GateKeeper seems fit for it.
Perhaps somebody will come up with a generic, powerful configuration language?


Perhaps somebody will come up with a generic, powerful configuration language?

That's actually how Lua got started (in a nutshell).

declarative interacting calculations

I designed this language when I was at Amazon. The reason it doesn't have any branching or looping is that it is a declarative language (yes, like a configuration file). In other words, it does not execute in sequence; conceptually, all statements execute at the same time. The purpose of the language is to facilitate date and currency calculations where different parties write different parts of the calculation. It also supports assertion statements: if any statement evaluates to false, the transaction will not occur. This allows parties to set conditions and rules, similar to a dynamic contract, via these GK code snippets. All of the snippets are combined together and run at once.
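A rough sketch of that evaluation model, in Python. This is hypothetical and illustrative only — the snippet format, names, and expression encoding here are invented, not GateKeeper's actual syntax. It shows the essentials described above: single-assignment bindings contributed by several parties, resolution that is independent of statement order, and a transaction that goes ahead only if every assertion holds.

```python
# Hypothetical sketch of a GateKeeper-style evaluator: statements are
# unordered single assignments plus assertions; snippets from all
# parties are merged and evaluated together.

def evaluate(snippets):
    bindings = {}          # the single-assignment "blackboard"
    assertions = []
    for snippet in snippets:
        for name, expr in snippet.get("assign", {}).items():
            if name in bindings:
                raise ValueError(f"{name} assigned twice")
            bindings[name] = expr
        assertions.extend(snippet.get("assert", []))

    values = {}
    def value_of(name):    # resolve on demand, regardless of order
        if name not in values:
            values[name] = bindings[name](value_of)
        return values[name]

    for name in bindings:
        value_of(name)
    # The transaction occurs only if every party's assertion holds.
    ok = all(pred(value_of) for pred in assertions)
    return ok, values

# Two parties contribute parts of one calculation:
merchant = {"assign": {"price": lambda v: 100,
                       "fee": lambda v: v("price") * 0.05}}
buyer = {"assign": {"total": lambda v: v("price") + v("fee")},
         "assert": [lambda v: v("total") <= 110]}  # buyer's spending cap

ok, values = evaluate([merchant, buyer])
print(ok, values["total"])  # True 105.0
```

Note that the buyer's snippet refers to `price` and `fee` without knowing or caring where they were assigned — that is the order-independence the comment describes.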

Really New?

Is the GK language a new language or merely a dialect of another declarative language? Did the team behind GK evaluate existing declarative languages for extension via a module before going ahead and creating a new one?


We really wanted something that was specific to the task, no more, no less. There was a security concern about giving too much capability, as might be found with a more general language. But in general, I don't think there's anything wrong with small, special-purpose languages. In situations like this one they can be very expressive of intent, and therefore can simplify complex tasks. Using an existing language has an advantage in the familiarity of its constructs, if that language is commonly known, but there's also some impedance mismatch if the language implies more capability than the application really needs or supports.

We don't call them

We don't call them domain specific languages for nothing...

(retracted comment)

The comment, repeated below, was a non sequitur. I misread the threading (and, if you made the same mistake I did, my comment made perfect sense :-). This entire comment can be deleted, if convenient to the editors. Sorry.


Retracted comment was:

I don't follow you.


language design vs. web

I'm disappointed that they promote a special, text-based syntax for this language. That design appeals because the syntax is comfortable if all you look at is the narrow domain -- it might be a win for their market -- but they give up composability and syntactic abstraction in "the web way". They also make it needlessly harder to process these programs.

It reminds me of how sometimes people will do a lot of macrology and/or read-table hacking to give a lisp an infix, non-s-exp syntax. Ok, you can do that and maybe in rare cases it's justified but, as a general rule, all you're doing is encouraging people to use a syntax that's hard for programs to manipulate in an environment where s-exps (easy to manipulate) are the norm.

Especially for a purely declarative language, it should be an XML-based syntax. They half-heartedly tip a hat to XML by using XML types (e.g., for dateTime values). In the future, though, a lot more things like this will be done in XML.

As for semantics:

Over the DOM type system, the XPath/XQuery standard functions provide a suitable set of purely functional primitive operators. Those should be included by reference to define their expression operators so that we don't have divergence in, for example, definitions of equality or order. Probably they are doing something very close to that internally, although one fear is that they are sloppily locking applications into depending on quirks of, say, SQL operators rather than XPath/XQuery operators.

Similarly, over the XPath/XQuery abstract notion of simple expressions, there is already an established set of definitions and elaborations on the "dynamic context" -- pre-defined "variables", base URIs, default DOM nodes, etc. Their blackboard may or may not be usefully isomorphic to some elaboration of how dynamic context is used -- whichever the case, the language specification should be in terms of established notions of dynamic context.

The particular domain -- payment systems -- is fascinating for its rich, largely "horizontal" complexity, innit? Just as we have things like DOM for structured data, it might be interesting to have an abstract type model (embedded in DOM, of course) of the "vocabulary" of payments. Just as we have things like XSLT and XQuery, a DSL could be built on top of the payment model.


p.s.: I don't feel so bad about XQuery having a special syntax, though it's hard to articulate why that language in particular is such a different case.

It sounds to me like you'd

It sounds to me like you'd be entirely happy if there were a parser/prettyprinter pair going from the DSL to an XML representation of the AST though? I really don't think it's appropriate to insist that end-users work with XML syntax all the time.

XML syntax foo

For web stuff like this, parsers and pretty-printers to a textual syntax can be fine things, especially if they are presented with practical open source implementations as XQuery and XSLT functions. For example, you might want to use a plain HTML <textarea> to edit code in this language.

Yet, to achieve consonance throughout all layers of the web, I would want: (a) the "canonical form" of a source text to be defined in terms of the DOM model, not as a grammar over character strings; (b) the domain of syntax trees of the language to be a restriction of DOM values -- and then we can judge its quality by how conveniently it uses the DOM structure; (c) in *everything but* parsing inputs or creating reports, use the DOM models, not the character string models.

Here is an example: Suppose that I look at the nice API from Amazon and I realize that, in my case, I'm going to want to generate these payment instruction programs automatically, deriving them from other stuff I have in my database. I need two things: First, I need a templating system to create "parameterized" source and source forms. In the web world, such things abound at the XML level but if you are just trying to splice strings, that's error prone and awkward. Second, I need a harmonized tower so that, for example, the notion of datum equality is the same when I'm generating code as what it will be when that code is evaluated -- I want "simple expressions" to be the same all up and down the tower.
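To make the templating point concrete, here is a minimal sketch of generating a parameterized instruction at the element level using Python's standard library. The element names (`paymentInstruction`, `recipient`, `amount`) are invented for illustration and are not Amazon's actual schema; the point is only that building the tree, rather than splicing strings, gets escaping and nesting right for free.

```python
import xml.etree.ElementTree as ET

def payment_instruction(recipient, amount, currency="USD"):
    """Build a (hypothetical) payment-instruction document as a tree.

    Working at the element level means escaping and nesting are
    handled for us -- no string splicing, no stray markup injected
    by a recipient name like "Books & More".
    """
    root = ET.Element("paymentInstruction")
    ET.SubElement(root, "recipient").text = recipient
    amt = ET.SubElement(root, "amount", currency=currency)
    amt.text = f"{amount:.2f}"
    return root

doc = payment_instruction("Books & More", 19.99)
xml = ET.tostring(doc, encoding="unicode")
print(xml)  # the ampersand comes out correctly escaped as &amp;
```

A string-splicing template would have emitted a bare `&` here and produced an ill-formed document; the DOM-level version cannot.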

The easy way to get those wins, in the web design patterns, is to specify just about everything in terms of DOM structures first -- and then give alternate syntaxes as "presentation layer" optional features. That, and to use XPath/XQuery as the basis for your idea of "simple [pure functional] expressions".

We'll see, I guess, how bad it is to ask people to edit DOM objects to build syntax trees directly, rather than editing text files that have to be parsed into such. My betting money is leaning towards DOM structure editors as the future of surface syntax, for the most part. (There will always be a tiny "flatland" space of text-based programming that explains how this DOM-land is bootstrapped.)

"The world [wide web] is my lisp machine AND my Xerox Star," --anon

Good Point

That's a good point about generating the statements. I can imagine that there exists a really good DOM or XSD that is an exactly equivalent syntax to the language. I don't think it would be hard to add this as an alternative format for the payment instructions. Unfortunately I'm no longer on the project and have no say in whether this happens. I don't think translation from an XML DOM to the base syntax tree would be a difficult task. Likewise, I think it would be doable to write a generator for both formats from the syntax tree, so there could be automatic and lossless translation between the two formats.
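As a toy illustration of such a lossless pairing — the one-statement-per-line grammar here is invented, not GK's actual syntax — a parser from a flat text form to an XML syntax tree, plus a printer back, such that the round trip reproduces the source exactly:

```python
import re
import xml.etree.ElementTree as ET

# Toy round trip between a flat text syntax and an XML syntax tree.
# The grammar ("name = left OP right", one statement per line) is
# invented for illustration.

LINE = re.compile(r"(\w+) = (\w+) ([+*]) (\w+)")

def parse(text):
    """Text syntax -> XML syntax tree."""
    prog = ET.Element("program")
    for m in LINE.finditer(text):
        name, left, op, right = m.groups()
        stmt = ET.SubElement(prog, "assign", name=name, op=op)
        ET.SubElement(stmt, "arg").text = left
        ET.SubElement(stmt, "arg").text = right
    return prog

def unparse(prog):
    """XML syntax tree -> text syntax (the pretty-printer half)."""
    lines = []
    for stmt in prog:
        left, right = (a.text for a in stmt)
        lines.append(f"{stmt.get('name')} = {left} {stmt.get('op')} {right}")
    return "\n".join(lines)

source = "total = price + fee"
roundtrip = unparse(parse(source))
assert roundtrip == source  # lossless in both directions
```

With such a pair in place, either form can serve as the canonical one and the other becomes a presentation-layer view, which is exactly the arrangement discussed above.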

DOM syntax models

I can imagine that there exists a really good DOM or XSD that is an exactly equivalent syntax to the language.

Sure. But please don't stop there. It's "trivial" when you put it that way -- saying that syntax is trees and DOM is trees so syntax can be DOM.

The key in the domain of web services isn't just XML-izing things but also creating "consonance" about choices of primitive types and the semantics of primitive operations.

The lovely work reported in this topic gives us a good example: timestamp values. The XPath/XQuery standard types and functions give us some atomic types for timestamps, complete with surface syntaxes and primitive operations (e.g. a rigorously defined ordering of timestamp values). Well, a new web service API could include those definitions by reference -- or it could take another route. Perhaps it could use SQL substitutes for the types and primitive operations. Perhaps it could use lisp substitutes.
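For instance, xs:dateTime ordering compares the instants that two values denote, not their lexical forms. A rough sketch of that "included by reference" semantics, using only Python's standard library (the specific timestamps are made up for illustration):

```python
from datetime import datetime

# Sketch of xs:dateTime-style ordering: comparison is by instant on
# the timeline, not by string. These two lexical forms differ, but
# they denote the same instant (12:00 at UTC+2 == 10:00 UTC).
a = datetime.fromisoformat("2007-08-03T12:00:00+02:00")
b = datetime.fromisoformat("2007-08-03T10:00:00+00:00")

print(a == b)                           # True: same instant
print(a.isoformat() == b.isoformat())   # False: different lexical forms
```

A service that instead compared the raw strings — or borrowed, say, a SQL engine's quirks — would order these two values differently from a conformant XPath/XQuery processor, which is precisely the divergence being warned about.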

You can put the syntax in XML just fine, regardless of how consonant your design is. But -- if you converge on the "standard types" then every conformant XML processor can operate on these values directly, using the ordinary functions for such, and obtain semantically robust results.

I'll be hand-wavey here (sorry): The PLT community has armed itself with huge amounts of really excellent theory. There are plenty of scattered success stories and a few deep ones but, really, doesn't it seem like theory is way ahead of practice for the most part? My vague impression is that we're entering an "era" within the PLT community when there will-or-should be a shift from theory to practice -- that the craft of expressing the big PLT ideas of the past decade or two in a pragmatic form will have high value.


Disclosure: Yup, I'm still working on my first-ever topic submission for LtU and, yup, it's very much in the theme of my remarks here. I currently think that what I'm going to wind up saying is, roughly, "Here is this thing that works beautifully, but just barely. To make it work really well is an extended exercise in PLT theory. Here be lots of jobs."