Erik Meijer started a discussion on Domain Specific Languages

Erik started an interesting post, but since not everyone can see it on Facebook, I've reproduced it here. I apologize for any errors in transcription.

Erik Meijer: Oh no, Wirth, Liskov, Allen are pimping DSLs. Please elders of the PL community, don't teach our grasshoppers the wrong ideas ...

Mark Hamburg: Link?

Erik Meijer: panel at ACM Turing celebration (#ACMTuring100)

Rastislav Bodik: Look who is speaking, Mr. Rx. :-)

Damien Guard: DSLs are bad?

Erik Meijer: I fully subscribe to Hudak-style embedded DSLs, which really are just well-designed APIs. External DSLs, on the other hand, are like puppies: they all start out cute and happy, but without exception turn into vicious beasts as they grow up (make, XSLT, regular expressions, ...).

Michael Maximilien: Like all (most) puppies, it takes constant care and feeding over a long time (especially at the start) to end up with a well-behaved, integrated dog... Unless you are the dog whisperer ;)

Rastislav Bodik: I can agree with your comment, mostly. While technically correct, calling Rx an API does not do it full justice, I think. It distracts from the fact that Rx introduces abstractions that are composable, analyzable, and --- in an alternative implementation --- also compilable. And this is the point that I was hoping the Turing elders would emphasize: regardless of the implementation (API or external), DSLs provide systematic building blocks for today's complex software.

Rastislav Bodik: On the question of cancerous growth of external DSLs, I also mostly agree, but I have come to believe that making a language internal helps only so much. Three other factors for a good DSL are the existence of a theory (the observable-iterable symmetry in Rx; attribute grammars in parser generators); whether you need to compile the DSL (having to do so restricts reliance on the host language); and the experience of the designer (I can share with you some beastly internal DSLs from cs164). But this is already too much info for Facebook...

Will Cook: I disagree. I think the problem is that we have never really taken seriously the kind of tooling (and theory) required to define and integrate families of external DSLs. But language workbenches are starting to make progress. My Ensō project (with Tijs van der Storm) is based on integrating external DSLs with interpreters (not compilers/translators). There are significant problems with internal DSLs as well.

Rastislav Bodik: Let's collect these comments into a workshop proposal!

Erik Meijer: The cost of building a great language experience is extremely high. Language workbenches may help some, but I have learned that a large part of the fit and finish required to make a truly great experience is hard to automate/generalize. Also, we should not underestimate the (informal) ecosystem of documentation, books, samples, blogs, courses, ... All that leads me to believe that "worse is better" and we should do the best job we can by defining a great API/internal DSL.

Rastislav Bodik: ==> Sounds brilliant and timely, as this thread shows.

Will Cook: What about:
International Working Conference on Domain-Specific Languages (DSL 2011) (not sure about status of next one)
International Conference on Domain-Specific Languages (DSL 99) [Oddly, this conference is unrelated to the one above]
Agile Development with Domain Specific Languages

Will Cook: I understand that there are costs, but some of these costs are subject to better tools. I am focusing on creating and integrating a few DSLs for widely used domains, the kind that can be widely implemented and shared (SQL DDL, SQL, BNF, Attribute, Authorization, GUI, WebUI, Datalog, etc). For smaller domains and more narrow languages, embedded is definitely the way to go.

Yannis Smaragdakis: Ok, I'm late to the party, but it's a topic I care about a lot.
On extensible langs/workbenches: I'm afraid that there are "inevitable" DSLs (the "deep domain theory" kind, e.g., SQL, yacc), which will be successful regardless of how they are implemented, and "trivial" DSLs that could well be APIs or lexical macros. There is little room between the two for large impact through good "workbench" infrastructure.

On lang design: I have to agree with Erik, language design is far from modular. It's more "integrated experience" and less "mix-and-match".

On whether external DSLs are ugly: I think they are, but so are death and taxes. There will always be a need for external DSLs. E.g., computation in different domains is just too diverse for a single control-flow pattern, even with higher-order functions or other clever constructions.

Will Cook: Yannis, you make some great points. I hope there is room in the middle. But don't give up on external language modularity yet :)

As a data point, compare these approaches to internal and external bidirectional parsing/pretty-printing on the dimensions of simplicity, usability, modularity, flexibility, and elegance:

Invertible syntax descriptions: Unifying parsing and pretty printing

Object Grammars: Compositional & Bidirectional Mapping Between Text and Graphs

There certainly are advantages and disadvantages on both sides.

Yannis Smaragdakis: Will, I am also hopeful there is some room in the middle for workbenches. But I'm pessimistic about external language modularity. It may be just lack of imagination: I recall several instances of language features that I thought were very nice and orthogonal, elegantly translated away, for which I was surprised to discover ugly interactions with seemingly unconnected features.
BTW, thanks for the pointers! They'll definitely be part of my summer reading! The parsing/unparsing problem is fascinating and I'm eager to see the state of the art.

Will Cook: I think part of the problem is the pervasive assumption that external DSLs must be translated/compiled. My hunch (research hypothesis) is that compilers don't compose well but interpreters can be composed. You are right that the composition must sometimes deal with interactions between features. Can you list the examples that you are thinking of?

Will Cook: Also WG 2.11 is now actively planning for another DSL conference. Current question is where it should be colocated. Here are the proceedings of the last one:
EPTCS 66: Domain-Specific Languages
If you are interested in helping to organize, please let me know.
There is also Code Generation
(Also, does anyone mind if I move this discussion to a more public place? LtU? Even though Erik knows everyone, I don't think FB realizes this)

Yannis Smaragdakis: Will, it's not easy to precisely recall or list examples of complex language features' interactions, exactly because they are weird and complex. But I remember several instances from my experience on the Intentional Programming project (at MS in the 90s). Things like Java-style exception handling, which seemed orthogonal to the base language, turned out to interact with other features in strange ways.

What I often bring up as a toy example (not representative of deep complexity, but easy to remember and explain) is the implementation of a "for" loop. It's easy to claim that "for" can be a high-level macro: for (init; cond; incr) body is short for: init; while (cond) { body; incr; } Only it's not. The body may contain a "continue", which makes the translation invalid. You try a different translation, and "break" breaks. There is probably no valid translation without labels and gotos. I can't see this being a problem unique to compilation. It would be the same with interpretation. You'd need to expose lower-level control flow, or you can't express the new language feature.


Thanks for sharing. Great exchange

Thanks for sharing.

A great exchange already, IMO, and one that fits in a single LtU post page.

I'd have at least one question for Erik, assuming he'd have enough insights to share about it:

how come the formerly code-named "Oslo" effort, especially the promising M, MGrammar, and their Quadrant et al. peers, seems to have been either abandoned or put on indefinite standby by Microsoft? AFAIC, that rather confused me, given the level of progress they had reached with it up until the 11/2009 CTP (*).

The rationale I have read so far, w.r.t. privileging tighter integration of that technology into MS SQL Server, never really convinced me beyond the commercial argument. But maybe I'm still missing something.

EDIT
(*) Not to mention the interest it sparked among some well-known contributors to software interoperability via (meta)languages.

Erik has seen this

I am pretty sure that Erik saw this discussion after I copied it from his Facebook page, but we will have to wait for him to respond to find out about your questions.

On interpreters vs compilers

I agree with so many of the points that Will and Yannis make (and Erik's original point!) that I did not think I would have much to say. Except on the hunch that compilers don't compose but interpreters can be composed, where I completely disagree, as I see compilers and interpreters as being essentially the same thing, only differing along one dimension: time. As in, when is the code actually 'interpreted'? This is usually called 'staging' nowadays, but it is also the difference between static and dynamic in partial evaluation.

Will, have you read and understood finally tagless?
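
To make the point concrete, here is a minimal finally-tagless sketch in Haskell (the class and instance names are illustrative): the DSL is a type class, and an "interpreter" and a "compiler" are just two instances that differ in when the work happens.

class ExpSYM repr where
  lit :: Int -> repr
  add :: repr -> repr -> repr

-- Interpret now: evaluate directly to a number.
newtype Eval = Eval { eval :: Int }
instance ExpSYM Eval where
  lit n = Eval n
  add (Eval a) (Eval b) = Eval (a + b)

-- "Compile": emit code to be run later.
newtype Code = Code { code :: String }
instance ExpSYM Code where
  lit n = Code (show n)
  add (Code a) (Code b) = Code ("(" ++ a ++ " + " ++ b ++ ")")

example :: ExpSYM repr => repr
example = add (lit 1) (add (lit 2) (lit 3))

-- eval example == 6
-- code example == "(1 + (2 + 3))"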

Explicit Staging != Partial Evaluation

Yes, but explicit staging (which is what a compiler does) requires manipulating some level of syntax objects, which makes composition more difficult, I believe. If you define "compilation" to be "interpreter + partial evaluation", then the problems with composing compilers go away.

Why are interpreters more

Why are interpreters more composable than compilers?

manipulating some level of

manipulating some level of syntax objects, which makes composition more difficult

Composition of syntax objects can be easy or difficult, depending on the target language. State machines don't compose very nicely, for example, but FORTH code composes nicely, and so do multi-agent systems or generalized arrows.

Compilers as a whole can at least compose by staging. E.g. `compilerFooBar :: Foo -> Bar` and `compilerBarQux :: Bar -> Qux` can easily be composed into a `compilerFooQux`. But the more interesting question is often whether two Bar programs can be composed, i.e. if we also have `compilerBazBar :: Baz -> Bar`, can we develop a useful compiler for a mixed FooBaz program? Here, it is the composition of `Bar` subprograms that matters, not the composition of compilers.
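
To make the distinction concrete, here is a small Haskell sketch using the hypothetical types above (the definitions are placeholders, not real compilers): composing whole compilers is just function composition, while composing Bar subprograms requires Bar itself to support some notion of sequencing.

newtype Foo = Foo String
newtype Bar = Bar String
newtype Qux = Qux String

compilerFooBar :: Foo -> Bar
compilerFooBar (Foo s) = Bar ("bar code for " ++ s)

compilerBarQux :: Bar -> Qux
compilerBarQux (Bar s) = Qux ("qux code for " ++ s)

-- Composing whole compilers is trivial: it is function composition.
compilerFooQux :: Foo -> Qux
compilerFooQux = compilerBarQux . compilerFooBar

-- The harder question: composing Bar subprograms. Naive concatenation
-- only works if Bar really supports that kind of sequencing.
composeBar :: Bar -> Bar -> Bar
composeBar (Bar a) (Bar b) = Bar (a ++ "\n" ++ b)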

Compilation is not the same as interpretation + partial evaluation. Interpretation suggests a certain potential for incremental computation that is not implied by compilation.

I think I need to clarify

I think I need to clarify what I mean by "compilers don't compose". I do NOT mean that the definition of an individual compiler cannot be compositional. What I mean is that it is not easy to compose compilers for a modular language definition, or to compose different compiler aspects/features of one language that are defined separately. The issue is that the interface between parts of a compiler involves the generated code, so the different parts of the compiler may have to cooperate to generate the right code. In other words, if A and B are compilers for aspects of a program P, and A(P) and B(P) are the code that they generate, then A(P) and B(P) must be integrated somehow. So yes, I am talking about the difficult case of a "compiler for a mixed FooBaz program" described by @dmbarbour.

Finally, I should say that "interpreters compose more easily than compilers" is a research hypothesis, not a proven fact. I'm investigating the question.

So a simple example

Say I have some language compiler targeting the MSIL backend. My language allows aspect weaving. In pass one of the compiler, the compiler decides to generate a bne.un.s MSIL instruction, the short form of bne.un, which means it can have a shorter label offset than a typical branching instruction. In the second pass of the compiler, aspects are woven in at join points, and I have accidentally violated the maximum branch offset allowed by the bne.un.s instruction by inserting too many intermediary instructions.

This is a typical scenario that aspect-weaving compilers have trouble with, as too often these languages use bytecode-weaving libraries that do not expose a suitable intermediate representation (too low-level, making it easy to violate stable interposition invariants such as short branches vs. long branches). Note that if I had an interpreter, there would be no "machine code" representation, typed or not, to worry about. The only composition issue is ordering syntax correctly.

As I understand it, Finally Tagless indirectly allows you to get around this, further refined through Atkey et al.'s Unembedding Domain Specific Languages. Atkey sets up the design argument in terms of two representations, de Bruijn indices and HOAS.

How broadly...

...do you mean partial evaluation? In my experience, the hard part of compilation isn't the beta-evaluation part (even under lambdas), it's the eta-rules and commuting conversions you have to do to expose the reductions. For example, suppose we want to optimize the id function below:

not : bool → bool
not b = if b then false else true

id : bool → bool 
id b = not (not b)

β-reduction gets you to:

id b = if (if b then false else true) then false else true

You need to apply a commuting conversion to turn this into:

id b = if b then (if false then false else true) else (if true then false else true)

Now, β-reduction of *this* gets you to

id b = if b then true else false

But you still need an η-rule to get:

id b = b 

Basically, the β-reductions/evaluation rules are oriented, so you always know which direction to apply them (modulo code size explosion issues), but the other rules are unoriented, and (1) in different cases you need different directions, and (2) sometimes you have to go through a worse program to get to a better one.

What I often bring up as a

What I often bring up as a toy example (not representative of deep complexity, but easy to remember and explain) is the implementation of a "for" loop. It's easy to claim that "for" can be a high-level macro: for (init; cond; incr) body is short for: init; while (cond) { body; incr; } Only it's not. The body may contain a "continue", which makes the translation invalid. You try a different translation, and "break" breaks. There is probably no valid translation without labels and gotos. I can't see this being a problem unique to compilation. It would be the same with interpretation. You'd need to expose lower-level control flow, or you can't express the new language feature.

What about something like the following? No labels or gotos used, but an anonymous function is used. Syntactically, I'm using a pseudo-JavaScript style.

const BREAK = -1, CONTINUE = 0;
let ret = CONTINUE;
init;
while ((cond) && (ret == CONTINUE && ret != BREAK)) {
    ret = (function () {
        body; // all breaks changed to "return BREAK", all continues to "return CONTINUE"
        return CONTINUE;
    })();
    incr;
}

Pardon the redundancy of ret == CONTINUE && ret != BREAK.

The lambda wrapping the body could be moved into the condition in languages that allow mutation in conditional expressions.

Perhaps a better example would be representing "while" as a macro. That may need gotos and labels, if "for" is not defined.

External DSLs on the other

External DSLs, on the other hand, are like puppies: they all start out cute and happy, but without exception turn into vicious beasts as they grow up (make, XSLT, regular expressions, ...)

My experiences agree with Meijer's observation.

Embedded problem specific languages can depend on staged programming or interactive/callback patterns to cover corner cases and the data plumbing between problems. This allows their designers to focus on local utility - to amplify or control expression for the common cases.

External problem specific languages grow up to become independent... generally without the supervision of a skilled language designer.

compilers don't compose well but interpreters can be composed

Compilers can compose well with staged programming, but I expect you refer to the more interactive/reactive programming styles where we need ongoing data plumbing. The trick is to compile different subprograms to a composable language, compose, then compile further. Related: multi-level languages

thesis expansion encouraged

I hope to tease out more detail via questions, because it seems something is in there I want to see, and I think other folks may want to read it too. (Thanks for sharing, since I never visit Facebook ... off my radar and vice versa.)

I guess you're close to it, so stating obvious points doesn't cross your mind, even though basic premises in an area are often very interesting ones. If there's a common thesis in the OP thread, I'm missing it, unless it's that internal is better than external when it comes to a DSL. I'd love to hear a more explicit statement of scope. (I tend to skip thesis statements myself, and just put my concern early instead as a substitute.)

When I search for definitions of internal DSL vs external DSL, it seems fuzzy. As a subject of blog essays, maybe the distinction is new and ongoing? I'm guessing internal means local to another host language, so an internal DSL is rewritten so it's expressed in the host language. (If I rewrite C to very similar C after re-doing the target abstraction, is that an internal DSL? Hmm.)

Is Yannis's "death and taxes" comment on the ugliness of external DSLs close to the central point?

I expect systems to diverge when you complicate them, without any track to utopia (even if you look). That's the issue, right? You add something useful, and something weird happens and it's also often ugly. So the more external DSLs present, the weirder and uglier it gets. It's like each one adds entropy incompatible with that of others.

I suspect I'm hearing advocacy of more internal DSLs, in staged compilation, so things get more complex via added stages, and less because of weird fit between different languages. Yes? That feels right to me, which is why I ask.

Why embedded DSLs are preferred

1. Skips writing parser and interpreter from scratch.
2. Integration with the host language is trivial.

Note, however, that embedded languages do not really offer any further benefits beyond these. For example, Erik decried Alan Kay's (sic, "Allen") demo of Frank, which is built from a collection of EDSLs. But why, exactly? For example, would it be worse to verify C than to verify 5 or 6 highly targeted and small domain languages? And what about tool-ability? Despite Haskell and functional programming languages allowing you to combine 4 or more DSLs to create something like the Yesod web programming framework, why isn't there a good IDE for building Yesod apps?

Further, let's compare, say, Nile (part of Frank) to Conal Elliott's work on composable shaders. Which is better? And why?

Also, "internal" or embedded DSLs are not re-written in a host language. They are simply written in the host language. To achieve this affect can be quite an effort and require a team of Ph.D.'s, due to challenges with sharing and recursion associated with creating "deeply embedded" DSLs. But sometimes a shallow embedding is nice, too. For example, terms that exist only to make code more readable and are actually ignored as function inputs.

Lots of interesting questions!

It's a good summary of the benefits of embedding. While there is a lot of interest in embedding, it's not clear that it is generally "preferred" by the community. It's also not clear that it will win in the long run.

1. Skips writing parser and interpreter from scratch

My experience so far with the GLR parser that Tijs van der Storm wrote for Enso is that writing a parser involves very little effort. Write the grammar, and it just works. GLR grammars are full context-free grammars with no restrictions (left recursion, PEG-style asymmetry, etc.), and they compose cleanly. We have integrated declarative creation of object graphs (a generalization of ASTs) into the grammar, so parsing to objects (and pretty-printing) is essentially free.

As for interpreters, they do involve some effort. But we reuse them because the interpreters are first-class parts of our environment, not closed boxes. Imagine packaging the syntax/semantics of expressions, binding, and comprehensions into separate reusable "language modules" in a language library. This doesn't exist yet, but it provides a good vision for how external languages could have powerful features as easily as embedded languages.
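
As a rough illustration of the "language module" idea (hypothetical Haskell types, not Enso code): two independent syntax fragments, each with its own interpreter, composed by summing their syntax and tying the recursive knot.

-- Two independent language fragments, parameterized by the expression type.
data Arith e = Lit Int | Add e e
data Logic e = BoolL Bool | If e e e

-- A composed language and its values.
data Expr = A (Arith Expr) | L (Logic Expr)
data Val  = IntV Int | BoolV Bool

-- Each fragment's interpreter only needs an evaluator for subexpressions.
evalArith :: (e -> Val) -> Arith e -> Val
evalArith _  (Lit n)   = IntV n
evalArith ev (Add a b) = case (ev a, ev b) of
  (IntV x, IntV y) -> IntV (x + y)
  _                -> error "type error"

evalLogic :: (e -> Val) -> Logic e -> Val
evalLogic _  (BoolL b)  = BoolV b
evalLogic ev (If c t e) = case ev c of
  BoolV True  -> ev t
  BoolV False -> ev e
  _           -> error "type error"

-- Composing the fragment interpreters into one for the composed language.
eval :: Expr -> Val
eval (A a) = evalArith eval a
eval (L l) = evalLogic eval l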

2. Integration with the host language is trivial.

This is a double-edged sword. In most of the recent work you mention, the current trend seems to be embedding DSLs in a way that supports multiple interpretations, optimization, and compilation. This tends to require a more explicit representation of the DSL (as an AST, for example) so that it is amenable to analysis in addition to other forms of interpretation. When you do this, the boundary between the DSL and the host language becomes more difficult to cross. Allowing more of the host language to be reified into the DSL sublanguage is also an interesting research area. LINQ is a practical example of the benefits and pitfalls of this idea.

Other aspects, including error messages, good syntax, debugging, and language-specific tools, are not so good for embedded languages.

The tradeoffs are significant enough that there is lots of room for research on many fronts at once.

NOTE: My understanding is that OMeta is an embedded language for grammars, but that Nile is a regular external DSL that is compiled to a lower-level language. The compiler is written using OMeta as a transformation engine.

Quick historical meandering

w7cook: Imagine packaging the syntax/semantics of expressions, binding, and comprehensions into separate reusable "language modules" in a language library.

This reminds me of David Espinosa's old semantic lego work, the MetaPRL extensible compilers work, and this semantics survey based on Winskel's text.

I feel like I must be missing something...

The proponents of embedded DSLs seem to find that they have many advantages, but the advantages I have seen listed in the past don't seem to hold up.

Yes, you potentially get benefits from not having to write parsing, semantic checking, optimization, code generation, runtime, and host-language interop code from scratch. But you slowly lose all these benefits when you try to do anything non-trivial.

As soon as you need to detect/infer/enforce any semantic property that can't be encoded in your host language, perform any domain-specific code transformation or optimization that isn't part of the host language, or generate code for a target that isn't supported by your host language (e.g., generate SQL code in LINQ, or GPU shader code in Vertigo, Liszt, etc.), you need to go ahead and build an AST and do most of the heavy lifting of implementing a compiler. Similarly, you increasingly need to write your own runtime and interop code as your embedded DSL gets more complex.
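
A minimal sketch of that drift (illustrative Haskell, not any particular library): once the embedded DSL has to emit, say, SQL, the "library" grows an explicit AST and a code generator, which is the core of a compiler.

-- An explicit AST for the embedded query DSL.
data Expr
  = Col  String
  | IntL Int
  | Eq   Expr Expr
  | And  Expr Expr

-- A code generator targeting a language the host cannot run directly.
toSQL :: Expr -> String
toSQL (Col c)   = c
toSQL (IntL n)  = show n
toSQL (Eq  a b) = toSQL a ++ " = "   ++ toSQL b
toSQL (And a b) = toSQL a ++ " AND " ++ toSQL b

-- toSQL (And (Eq (Col "age") (IntL 21)) (Eq (Col "country") (Col "c")))
--   == "age = 21 AND country = c"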

You may still get the benefit of the host-language parser and tooling, but that only goes so far when you realize that all your passes on your custom AST have to proceed without the benefit of good source location information, and the code you generate probably isn't step-through debuggable.

On the complaint that most external DSLs are of poor quality because they are created by people who aren't accomplished language designers, or who don't anticipate the scope of the task: that may be true, but why should that problem be particular to external rather than embedded DSLs?

Ultimately, we can just fall back on the idea that a good embedded DSL is "just a good library," but at that point what is being said? Obviously nobody is going to argue against good libraries, but it seems like there is something qualitatively different between an embedded DSL and "just another library."

My experience with DSLs is pretty limited, and admittedly narrowly focused on performance-oriented and GPU-targeted languages, so it is probable that I am missing something that the experts understand. I'd just like to know what that something is.

Embedding a DSL is

Embedding a DSL is advantageous, but not primarily because you can reduce some chores regarding semantic analysis, compilation, etc.

Rather, the advantage is the ability to focus the embedded DSL on a small problem space - handle 80%, 96%, or 99.2% of a specific problem or class of problems. This focus allows the embedded DSL to remain simple, or even to err on the side of over-simplification. Allow the GPPL to cover corner cases and provide glue between problems (e.g. via callbacks or on-the-fly code generation).
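
A small sketch of that division of labor (hypothetical Haskell types): the embedded DSL handles the common cases declaratively, while an escape hatch lets arbitrary host-language functions cover the corners.

-- A tiny embedded DSL for dispatching on strings.
data Rule a
  = Match String a              -- the common, declarative case
  | Custom (String -> Maybe a)  -- escape hatch into the host language

runRules :: [Rule a] -> String -> Maybe a
runRules [] _ = Nothing
runRules (Match k v : rs) s
  | k == s    = Just v
  | otherwise = runRules rs s
runRules (Custom f : rs) s = case f s of
  Just v  -> Just v
  Nothing -> runRules rs s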

External DSLs cannot easily remain focused. They have no mother language to depend upon, and so they must reinvent much general-purpose behavior. Even SQL is Turing complete these days. But these languages fight, deny, and are ashamed of their new responsibilities. Rather than growing into a responsible adult GPPL, an external DSL becomes a man-child of the language community.

A GPPL can be designed to support extension and embedding of DSLs, making it easier to integrate syntax, tools, and debugging. The PLT group has done a lot of work in this area. It is feasible to standardize some aspects of embedded language development. A common API for debug views could be leveraged directly by an embedded compiler and even by a meta-grammar.

An interesting possibility I explore for staged DSLs with live programming: mashable, composable widget-based grammars and syntax (with bi-directional debugger views and human interaction/gestures built right in) rather than dumb text layout. A widget-AST is compiled to provide continuous feedback through an IDE (at least while that fragment is in view) even while executing some behavior.

a good embedded DSL is "just a good library," but at that point what is being said?

It says that designing a good library API is the same problem as designing a good language. It says that programming is language manipulation. That's a useful revelation, I think.

It says that designing a

It says that designing a good library API is the same problem as designing a good language. It says that programming is language manipulation. That's a useful revelation, I think.

There is something more in Tim's post: "As soon as you need to detect/infer/enforce any semantic property that can't be encoded in your host language...". For a given DSL, not every general-purpose language can encode it; you need a language that is expressive enough. This means that not all languages can support good libraries for this problem domain. You need expressive languages to write good libraries, and that's probably the main reason why we fight for more and more expressive languages¹.

¹: with, as a caveat, the remark by Ehud that languages that focus too much on general expressivity may have adoption troubles.

Expressiveness, as described

Expressiveness, as described by Matthias Felleisen, is a property that allows us to locally (i.e. within a syntactic bounding box) encode a behavior or property that would be distributed (non-local syntax) if expressed in some other language. A sufficiently expressive language should allow direct encoding of DSLs, as plain old functions, types, libraries. That is a valuable feature, certainly. All else being equal (though it never is), more expressiveness would be a good thing.

But expressiveness is not essential for making a GPPL a great host for DSLs. It is straightforward to cover a deficiency of expressiveness with one more layer of indirection, staged computation, perhaps a little shared state or inversion of control. The result is frameworks (and services, etc.).

Expressiveness is ultimately about covering "more" abstractions. But we could achieve more productive and useful languages, I think, if we can discover just the "right" abstractions and cover those very well. An alternative to seeking a more expressive language is to:

  • develop a language that is better at abstracting indirection, inversion, and staging
  • encode DSLs in the indirect style, as frameworks and staged computations

This is the approach I favor. Expressiveness is a property I've systematically reduced in favor of composition and modularity features. Those features - especially reactive semantics - make it easier to abstract and compose frameworks (at their interfaces) as if they were first-class entities.

We don't need more expressive languages to write good libraries or embed wonderful DSLs.

Static analyses should remain direct

You will indeed gain a lot of flexibility by adding a layer of interpretation in between. This flexibility is, however, in the dynamic rather than the static semantics. If you want to, as Tim mentions, "encode semantic properties", you also need to work at the static level: you want to lift static knowledge about the DSL into static knowledge the host language can manipulate.

I believe the ability to represent static knowledge is also a desirable form of "expressivity"; if you are not able to accept a program that you know is safe by domain-specific reasoning, or on the contrary not able to reject a program that a domain-specific analysis would deem meaningless, you are less expressive. In a Felleisen-like presentation, I would require that the syntactic translations you're considering preserve typeability of the program (both positive and negative).

If you move from a shallow to a deep embedding, you will have more flexibility to implement arbitrary dynamic rules, but you're likely to lose static knowledge in return. "Well-typed programs do not go wrong": you need to type your pieces of staged program from the point of view of their final, non-staged semantics. Even with GADTs, for example, you can offer a superficial level of typing for combinators, but representing terms with binders is non-trivial. I think the "Finally Tagless" style has been effective at showing us that we can do much with simple things, but that required a lot of ingenuity, and I'm not sure it can be systematically applied to all kinds of static analyses.
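
For concreteness, here is the standard GADT sketch of that "superficial level of typing for combinators" (illustrative Haskell; note there are no binders, which is exactly where it stays easy):

{-# LANGUAGE GADTs #-}

-- Object-language types are tracked by the host type checker.
data Exp a where
  LitI :: Int  -> Exp Int
  LitB :: Bool -> Exp Bool
  Add  :: Exp Int  -> Exp Int -> Exp Int
  If   :: Exp Bool -> Exp a   -> Exp a -> Exp a

eval :: Exp a -> a
eval (LitI n)   = n
eval (LitB b)   = b
eval (Add a b)  = eval a + eval b
eval (If c t e) = if eval c then eval t else eval e

-- Add (LitI 1) (LitB True) is rejected statically by the host language.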

Static analysis can be staged, too

Encoding of semantic properties does not need to occur all in one stage. The essence of static analysis is to reject some programs after analyzing them but before running them. This is quite easy to express:

compileAtoB :: ProgramA -> Either ProgramB ErrorsInA

With staged programming, the computation of compileAtoB, in which some errors are discovered, may occur in one (of many) explicit compile-time stages. At least for cases where we know ProgramA statically, such as when it is part of our source code.
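
A small sketch of that shape, reusing the signature above with placeholder types (the check here is deliberately trivial, standing in for a real domain-specific analysis):

data ProgramA  = ProgramA String
data ProgramB  = ProgramB String
data ErrorsInA = ErrorsInA [String]

compileAtoB :: ProgramA -> Either ProgramB ErrorsInA
compileAtoB (ProgramA src)
  | null src  = Right (ErrorsInA ["empty program"])
  | otherwise = Left  (ProgramB ("compiled " ++ src))

-- In a staged setting this runs in an explicit compile-time stage;
-- either way, errors surface before the program is ever run.
runChecked :: ProgramA -> IO ()
runChecked p = case compileAtoB p of
  Right (ErrorsInA es) -> mapM_ putStrLn es
  Left  (ProgramB b)   -> putStrLn ("running " ++ b)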

There is an ideal held by many of the more mathematically inclined PL designers: to forbid errors in the embedded DSL in the same stage as some other static analysis of the host language. This might be achieved by a sufficiently expressive type system (e.g. dependent and linear typing). One could express "this compilation isn't allowed to fail" by a partial function that only handles the success case, or by assertion, or by type annotation.

But, even without expressive analysis, staging can solve the very practical goal of detecting errors at development time. And staging naturally results in more consistent, flexible, and reusable code (rarely need to specialize code for runtime vs. compile-time, or closed system vs. open system). Staging is simple, and results in simple systems.

"Well-typed programs do not go wrong": you need to type your pieces of staged program from the point of view of its final, non-staged semantics.

You are assuming a stronger goal than may be necessary. Rather than "programs [that] do not go wrong", consider "the ability to control when and where programs go wrong, and how wrong they can go". I don't mind programs that go wrong, so long as they do so at development time, on my machine, and without any dangerous authorities. My goals include productive development, ability to refactor applications into services, and to eliminate errors before they reach the customer.

What are your goals? Why are they your goals?