Languages with 'unique' programs

Has there been any PL research into languages where every semantically distinct program has exactly one representation in the source language? In other words, a language where, for any given set of inputs, any change to the source text yields a program that produces different outputs for those same inputs. Basically, taking Python's "there should be one-- and preferably only one --obvious way to do it" philosophy to its logical extreme. I'm interested in how limiting this would be on what you can express, since of any two equivalent algorithms only one could be written in the language. Is there an established technical term for this that I can Google?

This is not possible because

This is not possible because of various incompleteness theorems.

Could you expand a little

Could you expand a little more? I can construct trivial languages for which this is true, e.g. a language whose only token is 'I', meaning +1. So "I", "II", and "III" are 'programs' returning 1, 2, and 3 respectively. Obviously that one's not very interesting, but my point is that there has to be some qualifier on what you mean by "not possible". Do you mean not possible for a Turing-complete language?
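The toy language above can be made concrete in a few lines - a sketch (the interpreter and its name are mine):

```python
def run(program: str) -> int:
    """Interpret the toy language: the only token is 'I', meaning +1."""
    if set(program) - {"I"}:
        raise SyntaxError("only the token 'I' is allowed")
    return len(program)

# Distinct source texts always produce distinct outputs, so every
# semantically distinct program has exactly one spelling here.
```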

Turing complete yes. If the

Turing complete, yes. If the language is Turing complete and each program is unique, then the program that doesn't halt is unique. Hence you can trivially decide whether a program halts by comparing it to the unique non-halting program. This is a contradiction, because the halting problem is not decidable for Turing-complete languages. So such a language cannot be Turing complete. Even worse, much weaker notions of computation, such as context-free languages, have undecidable equality, so they can't be expressed either.
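The reductio can be written out as a sketch (purely hypothetical: `THE_NON_HALTING_SOURCE` cannot actually exist for a Turing-complete language, which is exactly the point):

```python
# Hypothetical premise: in a language of unique programs there is exactly
# one source text whose behavior is "never halts".
THE_NON_HALTING_SOURCE = "loop forever"  # placeholder, not a real program

def halts(program_source: str) -> bool:
    """A 'halting decider' that syntactic comparison would give us.

    If unique representation held in a Turing-complete language, this
    one-line string comparison would decide the halting problem,
    contradicting its undecidability. Hence no such language exists.
    """
    return program_source != THE_NON_HALTING_SOURCE
```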

non halting program is unique

I like the argument, but why is the non-halting program unique? I can think of lots of non-identical non-halting programs.

But in a "language with

But in a "language with 'unique' programs", there can be only one non-halting program - assuming there are no side effects other than the result.

one program is not all programs

I saw that, but it doesn't solve the problem in the proof. There is no theorem saying you can never determine whether a program is non-halting, just that you can't always determine it. So it is entirely possible that for that one program in particular you can determine that it is non-halting.

while(1) { x = random }

is clearly non-halting, while

PRINT a nontrivial zero of the Riemann zeta function

is rather unclear.

How would one express "PRINT

How would one express "PRINT a nontrivial zero of the Riemann zeta function" in that hypothetical language with exactly one non-halting program?

More formally, suppose that the programming language (modulo the usual theoretical niceties) is Turing equivalent. Rice's theorem then tells us that it is impossible to always determine whether a program in that language halts (or to decide any other non-trivial semantic property)... Or, directly: program equivalence is reducible to halting and vice versa.

who knows

First off, I'm unclear on whether there is only one non-halting program. I agree there is one representation for any finite output, by the given property. It isn't so clear there would be only one non-halting program, though; things don't carry over smoothly from the finite to the infinite. And if we assume there is, then I agree the Riemann zeta example works backwards to prove the language isn't Turing complete.

As for Rice... I think Rice does prove the needed result directly.

Not relevant. You are

Not relevant. You are thinking about "lots of non-identical non-halting programs" in a language that is NOT a language of unique programs. If we were in a language with unique programs, the problem would already be solved (there is only one program whose behavior is non-halting).

Your argument seems analogous to denying (anonymous typesafe-by-assumption language)'s type safety by saying "I can think of a lot of unsafe C programs." It doesn't make sense.

Perhaps you meant to argue that we cannot *translate* programs of arbitrary Turing complete languages into a language with unique programs. If so, you repeat Jule's point, and my own - the real proof effort is regressed to translation.

No not what I was thinking

Here is what I was thinking.

I have a language L, and a map m which takes some programs P to their representations in L, denoted pl; i.e. for any finite p in P, m(p) = pl. The given property is that if p and q always produce the same output for the same input, then, if both pl and ql exist, pl = ql.

Now take a non-halting program n, with representation m(n) = nl. I see no reason why, for a particular nl, I might not be able to determine whether it halts. Moreover, there is no reason the non-halting program is unique: infinite objects have properties that finite objects cannot have.

You seem to be decoupling

You seem to be decoupling input from the program, i.e. we could have programs that halt only for even inputs and other programs that halt only for odd inputs. I suppose this could work.

In a language with unique programs, it would be contradictory for (f 42) to be equivalent to any other program, such as the result of constant-propagating 42 into f. So we must get rid of either the notion of inputs, or semantics, or the notion of constant terms. There will be only one non-halting program that takes no inputs - i.e. the non-halting term.

yeah the author is vague

This is a good point, and you are right that I'm reading it the opposite way. If f and g are both functions of one variable, and (f 43) and (g 31) produce the same output, I wouldn't assume that (f 43) and (g 31) have the same representation in code. I'd assume the uniqueness applies only to f and g without arguments.

Because otherwise I'd have something like:
h n = if n < 1000 then f n
      else g (n - 1000)

I wouldn't assume that (h 1031) has the same code as (g 31), since f is embedded in h, and h is arbitrary. Anyway, the more we talk about this, the more the fact that uniqueness is hard to define looks like a serious argument against it being possible.

Well, while I, too, don't

Well, while I, too, don't think the OP's question points in a reasonable direction to pursue, I think I understand the OP's underlying concern.

There is indeed an unfortunate impedance mismatch between Turing-complete languages, the context-free grammars used to serialize them, and, say, our brains' expectations.

Just think about it (we can never emphasize enough how important the full extent of Godel's, Turing's, Church's, and others' results is):

the main result really is that the halting problem is undecidable for Turing machines. That includes the universal one, which can simulate all the others, and which of course may itself fail to halt while trying to decide whether the program executed by the TM it simulates halts or not.

"Concretely", and reduced to the simplest form I could find (in my own understanding, anyway), we can also restate Turing's result as follows:

given an arbitrary language L over a finite alphabet, and the free monoid L* it generates, then:

the function

(L -> N) -> N

can be seen as a set representing all the possible ways to rewrite phrases of L into an executable encoding - executable, that is, by a unique, conventionally chosen (agreed upon by everybody) universal Turing machine. Though it would be utterly impractical, our world could theoretically have no more than just one UTM physically running somewhere. And "N" is the natural numbers (with zero), any finite subset of which can obviously be encoded as finite binary strings for the UTM's input and output.

The function is largely partial, of course: some elements of L* won't also be(*) elements of another language - say CFG, a subset of L* defined by a context-free (or other) grammar suitable for expressing a Turing-complete language.

(*) that is, at least not in any practical way we can think of for L*, within a given range of time.

Look up "utm" on Wolfram Alpha, for example, for reports on the simplest forms of the UTM found so far (simple enough to wire in hardware).

So this also means that, assuming a word w has been verified to belong both to L* and to the language the CFG generates, you are still faced with the responsibility of having chosen EXACTLY ONE essential thing, "once and for all", to make that word w of the CFG meaningful to run on the tangible computer/UTM:

namely, the semantics we had to pick, as humans, for the subset of L* that happens to be the one generated by the CFG.

And that is the crux of the undecidability of the halting problem for Turing machines, precisely because this "problem" - the inception step of choosing the CFG language's semantics - corresponds exactly to the first arrow, in the parenthesized map, of

(L -> N) -> N

with an infinite set on the right of that arrow. It doesn't matter that it is "only" countably infinite; it is, before anything else, infinite, just like L*.

In other words, one has to know exactly - and faithfully stick to this knowledge - how every single phrase of ANY of the possible CFG languages is to be rewritten into something executable by the UTM.

Too bad: we just saw that this notion of computation/computability, in the Turing-machine setting thus picked, ESSENTIALLY and DIRECTLY depends on an INITIAL choice made from an infinite collection (N) of ways to rewrite into executable form - just as we had an infinity of languages L to start with!

Thus there is exactly "one infinity too many" (that of N) to deal with, and it prevents that "program" from existing: a "program" able to decide whether every program halts, itself included, and whose executable form CANNOT be uniquely defined, or even picked from a finite set.

This essential affinity of the halting problem with the product N x N, which makes its imagined decider impossible for Turing machines at least, is also well explained elsewhere, IMO (the other explanation that made the most sense to me so far, besides my own interpretation, which is more oriented toward languages and rewritings).

As for the other, outer map (the second arrow; also a partial function in the general case, of course): it simply represents how the specific encoding of one specific program written in L*/CFG behaves when executed on its input - i.e., which particular partial function that program implements, when this element of (L -> N) -> N is reconsidered as some Input -> Output (or D -> C, domain to co-domain) map.

I think there is an even simpler way (an analogy) to look at it, should we decide to drop all the mathematical notions while keeping it meaningful:

the halting problem is like a lost foreign tourist P trying to find their way: P must first attract the attention of a good samaritan H who understands P's language, though H is not assumed A PRIORI to be a local of the city being visited; H must then figure out the route on P's behalf, with help from one of the locals who know the place inside out...

The problem: assuming ONLY H is allowed to take initiative after hearing P's call for help, there is no way, a priori, for P to know whether H even speaks the locals' language in the first place and can actually consult one of them - or whether H is just another lost tourist who happens to speak P's language! (There is no way to know, i.e., not without some sort of oracle, or other decision clue, for P to pick the right H up front.)

Again, that's how I understood the halting problem's result anyway.

But that said, and back to the OP's post: it witnesses, in its own way, I think, this everlasting, bitter "disappointment" we all have regarding context-free grammars (used to implement Turing-complete languages or not) and how they relate to what we define as computable by a TM:

some DISTINCT programs, while looking VERY MUCH alike, are different precisely because they reduce, in the end, to (more or less) different encodings of themselves for a hypothetical, UNIQUELY chosen UTM (again a priori and purely arbitrary, for the sake of being able to know about their difference).

And conversely, SEEMINGLY different programs (words of the same CFG-generated language) are actually, provably, implementing THE SAME partial function - from the Input -> Output point of view at least - but are STILL NOT reducible to one and the same recognizable encoding on the UTM!

Just because of that leftmost arrow above. QED and... ouch! - though thank you, Godel and Turing, for noticing this for us as early as you did. :)

But that's also where it's somewhat sad, because it brings the "social" side effect (for computer scientists and software engineers :) that languages' distinct IMPLEMENTATIONS become the culprit not only of historical and accidental complexity, but also of HIDING, de facto and very forcefully most of the time, which of the implemented partial functions are actually the same from a strict logical/mathematical point of view - and, likewise, which are observably, deceptively different in the end, in disguise, however much they look alike in their source language.

I believe that is, in essence, what the OP finds regrettable - and indeed it is (and it should encourage us to seek to do better, I agree) - while also looking, I'm afraid, in the wrong direction to fix this fundamental issue (no offense meant).

Otherwise: is the halting problem really ABSOLUTELY undecidable with respect to a broader, looser notion of computability?

Unsure. After all, hasn't Godel HALTED our search for a formal system that would be both complete AND consistent with respect to our systematic, conventional, formal processes for deciding the truth or falsehood of statements (over given models)? He certainly did so for the mathematics whose discovery we'd like to delegate to machines, as we know.

So then maybe (IMO) the halting problem was undecidable only for Turing machines (at most), yet decidable for at least one human brain (Godel's).

My belief & my 2cts. :)

Btw, I'm currently especially interested in how we can relate this easily recognizable categorical form:

extract : (M -> T) -> T, and its usual associate:

extend : ((M -> U) -> T) -> (M -> U) -> (M -> T), both of the comonad pattern,

to instances of host/guest language phrase inclusion, for (maybe?) constructive collaborations :)

(I mean: just make M be L, T be N, and U a given, fixed semantic identifier - e.g. a URI under a closed-world assumption (which happens to hold for resolvable URIs) - and you have de facto put yourself in a (scalable enough?) function-comonad-based language-integration design context, in the most general setting we already know, Turing complete: our good old friend the untyped lambda calculus :)

Not Turing-Complete?

Is that even possible in a Turing-complete language? Given a program p, I can write a program p' that includes a TM simulator and runs p on it, so their outputs will be the same.

Which leads me to ask: what interesting non-Turing-complete languages are out there, that people actually write in? Some DSLs, certainly, like regular expressions. Anything more 'universal' in application, or less well known?

No mainstream languages

No mainstream languages, certainly, but Coq is guaranteed to terminate in finite time, which necessarily means it's not Turing complete.

what interesting

what interesting non-Turing-complete languages are out there, that people actually write in

Coq, Agda, Lustre, Esterel, FunLoft. It's actually easy to write "complete" programs in many incomplete languages - you just feed them a never-ending stream of tokens or time at the topmost level. To the extent you design for it, you can guarantee progress and other nice properties for each round or step. It's a useful design for real-time languages, for example.

Being not Turing complete is less problematic than the inability to abstract interpreters. ;)

One could even continue

what interesting non-Turing-complete languages are out there, that people actually write in

One could even continue David's list with, e.g., SQL.


SQL is Turing Complete,

SQL has been Turing complete since 2008.

So, that seemingly took it

So it seemingly took SQL 38+ years from its inception to enter the Turing-completeness realm.

No refactoring or abstraction

It would be an awful language to program in. And it would necessarily be incomplete, too; otherwise you could write an interpreter that gives you multiple ways to write programs.

I think that's an extreme you don't really want to explore beyond recognizing it. We need meaningless distinctions in structure - preferably predictable, consistent ones like associativity, commutativity, idempotence, and alpha equivalence. Those provide lines for abstraction, refactoring, modularity, and reuse.

That we "need meaningless

That we "need meaningless distinctions in structure" is a very interesting point, but I'm not sure I agree. We need them in current languages because, for any given solution, there are infinitely many ways to express it, and only a very small subset of those are easy for a human to read. So we use the "meaningless distinctions" to rewrite hard-to-understand solutions into equivalent, easier ones. That doesn't mean we need meaningless distinctions; what if a language could be constructed such that the easy solutions were the only ones you could express?

easy solutions were the only ones

what if a language could be constructed such that the easy solutions were the only ones you could express?

Then you'll have a language that solves only easy problems. (Simple isn't easy.)

Another possible application

Another possible application would be proving that two algorithms are equivalent (by translating each into this language and comparing the source). That could be useful for a compiler.

Finding this translation and

Finding this translation, and thus the proof of equivalence, would not be any easier.

How do you know that it

How do you know that it wouldn't be any easier? Is something like this already a studied approach?

Your language would just be

Your language would just be a representation of the proof; the actual proof process would be the translation effort. Translating into your language would also require solving the halting problem, so it is actually harder than a direct proof of equivalence. And yes, translating programs into a simply structured normal or intermediate form to aid proofs is a fairly standard technique, AFAICT.

It's only equivalent to

It's only equivalent to solving the halting problem when translating Turing-complete languages. There are plenty of interesting algorithms that can be expressed without Turing completeness. Could it be easier within that niche?

Why would it be? It isn't as

Why would it be? It isn't as though your language is performing the proof. The actual problem has been regressed to the translation task.

Suppose your language can

Suppose your language can represent some subset V of all possible programs U (U = all Turing machine programs). Your problem is: given some program in U, is it in the subset V? Equivalently, can it be translated into your language?

Answer: your problem is undecidable, by Rice's theorem.

What if U = all programs

What if U = all programs that don't necessitate Turing completeness?

Then how do you know, a

Then how do you know, a priori, that your program is in U? It would have to be written in a language that is already not Turing complete - and not even context-free (for context-free languages, equivalence of two programs is also undecidable). So you're restricted to programs written in regular languages, or in some superset of those that is a proper subset of the context-free languages.

For such languages it's easy to generate your "unique" language. Given any language L, define the unique language Lu as follows: number all possible programs in L (e.g. by sorting their source code strings); for every set of equivalent programs, take the one with the lowest number as the unique representative in Lu, and declare all the others not to be in Lu. To transform any given program from L to Lu, compare it against all programs with a lower number.
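The construction can be carried out mechanically for any toy language with decidable equivalence. A sketch under my own assumptions (the grammar, the shortlex numbering, and all names are mine): programs are fully parenthesized arithmetic over the literals 1 and 2, the semantics of a program is its value, and Lu keeps only the shortlex-least program for each value.

```python
def programs_up_to(n):
    """Enumerate all programs of the toy language with source length <= n.

    Toy grammar: P ::= '1' | '2' | '(' P '+' P ')' | '(' P '*' P ')'
    """
    progs = {"1", "2"}
    changed = True
    while changed:
        changed = False
        for a in list(progs):
            for b in list(progs):
                for op in "+*":
                    p = f"({a}{op}{b})"
                    if len(p) <= n and p not in progs:
                        progs.add(p)
                        changed = True
    return progs

def meaning(p):
    """Semantics of a program: its value (eval is safe on this grammar)."""
    return eval(p)

def is_in_Lu(p, bound):
    """p is in Lu iff no shortlex-smaller program has the same meaning."""
    for q in sorted(programs_up_to(bound), key=lambda s: (len(s), s)):
        if (len(q), q) >= (len(p), p):
            break
        if meaning(q) == meaning(p):
            return False
    return True
```

For example, `(1+1)` is excluded from Lu because the smaller program `2` already denotes 2, while `(1+2)` stays in, being the least program denoting 3.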

Very good point :) I need to

Very good point :) I need to go review my Sipser; I'd forgotten that equivalence of CFGs is undecidable.

Edit: I also think what you just suggested for Lu is equivalent to what Jason says about the simply typed lambda calculus further below.

This just doesn't seem tenable

A language with the desired property can't even have both multiplication and addition (since "x + x" and "2 * x" would be equivalent programs). It can't have both a function and its inverse (which rules out things like logical negation).

You'd either need to get rid of literals or introduce some kind of rule banning expressions composed only of builtin operations applied to literals (since these could be constant-folded into an equivalent value).

You'd either need to ban if-then-else conditionals, or have some kind of rule banning the 'then' and 'else' branches from being equivalent. You can't allow a loop with a known trip count (since it is equivalent to its unrolling), so you add more rules. At some point the weight of such auxiliary rules adds up until you end up with something more, rather than less, complex.

Most importantly, as dmbarbour notes, you can't have any abstractions in a language like this, and a language without any abstractions is probably too impoverished to call a programming language.


I'd thought about the constant-folding problem. Rather than adding more language rules, it might be possible to get something both useful and in the spirit of the original question by relaxing the equivalence requirement just a smidge - say, a language where programs are unique except under constant folding. The more interesting part is programs having differing semantics, and programs that are the same after constant folding obviously have the same semantics.

a simple construction

Here's something the other posters don't seem to have considered yet. Observational equivalence is decidable in the simply typed lambda calculus (if I recall correctly). So define a total order on lambda terms in such a way that it's easy to enumerate all terms less than a given term. Now your language can just be the simply typed lambda calculus, except that it rejects any term for which there is an equivalent lesser term.

EDIT: the total order needs to have only a finite number of terms less than any given term.

Your total order could be: t0 < t1 if t0 has fewer lexemes than t1, or, if they have the same number of lexemes, if t0 is lexicographically less than t1.

No matter what total order you choose, your language will be able to express everything the simply typed lambda calculus can express; and if you use the order above, it forces you to write the shortest possible program.

The same construction would apply to any language with decidable observational equivalence.
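The proposed order is the standard shortlex order. As a sketch of the comparison key (using character count as a stand-in for lexeme count; the function name is mine):

```python
def shortlex_key(term: str):
    """Order terms first by length, then lexicographically.

    Under this order only finitely many terms precede any given term,
    which is exactly the property the construction needs."""
    return (len(term), term)

# sorted(terms, key=shortlex_key) enumerates terms so that each one
# is preceded only by the finitely many shorter-or-lexically-lesser terms.
```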

You don't need to do that;

You don't need to do that: the constructive proofs of decidability of equivalence more or less directly correspond to rewriting each term into a canonical form that can be compared quite directly. You can use static rules to require terms to be in this normal form; see the work on hereditary substitution in the LF/Twelf community for an example of how to adapt the metatheory in such cases.

How intentional can you be?

What does it mean for two programs to be "the same"? You gave an extensional definition, where two programs are "the same" when they give the same outputs on the same inputs. Are quicksort and bubble sort "the same" program? Is there a good reason why your language would allow you to write one rather than the other?

Programmers, unlike perhaps most mathematicians, are interested not only in the "result" of programs but also in their behavior. How fast do they run? How much memory do they consume? What is the power consumption of the computer running them? Were your definition of "the same" this coarse, you would lose all power to say anything about those less apparent aspects. The ultimately fine-grained equivalence between programs is that they have the same dynamic semantics (e.g. as a space of one-step reductions, if your semantics is defined that way); it's nice to be able to use coarser equivalences in the common cases where they suffice, but maybe you don't want them forced upon you. That's what equivalence relations are for: being different from equality in an interesting way.

(Given a program, you can make some of those intensional aspects extensional by translating it into a model where use of CPU, memory, etc. are explicit effects - through monads, capabilities, or what have you - rather than implicit consequences of the dynamic semantics. But that supposes you have this program to start with, with some intensional behavior to make explicit.)

Performance Abstractions

You raise a good point. I would note, however, that we can turn any intensional property into an extensional one by modeling the computation itself as part of the result. I.e. instead of:

Domain -> Range

We get:

Domain -> (Range,Computation)

We describe the computation in a tree-like structure - or, in the trivial case, every program prints itself AND provides the result we actually care about.

Within the proposed language we would be unable to throw away the computation value. But if we know it's going to be thrown away by some program in an outer language, it could be optimized away.
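One way to sketch the Domain -> (Range, Computation) idea in code (all names and the trace format are mine): pair each result with an explicit trace of the computation that produced it, so two algorithms computing the same function remain extensionally distinct programs.

```python
def traced_sum_loop(xs):
    """Sum by iteration, returning (result, trace of computation steps)."""
    trace = []
    total = 0
    for x in xs:
        total += x
        trace.append(("add", x, total))
    return total, trace

def traced_sum_formula(n):
    """Sum 1..n by Gauss's formula: same Range value, different trace."""
    return n * (n + 1) // 2, [("formula", "n*(n+1)/2", n)]

# Both compute the sum of 1..4, but as Domain -> (Range, Computation)
# values they differ, so a 'unique programs' language could keep both.
```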

Sure, just trivially

See Mitch Wand's paper "The Theory of Fexprs is Trivial." Short story: programs are observationally equivalent only if they are the exact same program. This is pretty much a warning for programming languages not to be "too reflective."

Keen Point.

Similarly, we could create a language where every program's behavior is to print itself. In that case, every distinct program will have a distinct behavior.

I don't think that's true.

I don't think that's true. With fexprs, *terms* are observationally equivalent only if they are the exact same term. Whole programs can be observationally equivalent without being the same - for example, (display 2) and (display (+ 1 1)). With fexprs the context of a term can read and print its AST representation, so for terms the claim is indeed true (but a rather trivial observation).

Programmers only think in terms, not programs

Since "programs" include all libraries, the runtime, the operating system, and the chipset. Observations of programs are defined as part of the semantics (say, the result, or just termination), so programs being "observationally equivalent" is degenerate. Only equivalence of terms is interesting. If your algorithm/function is unapplied, it's waiting for a context to use it.


Almost. The general distinction is best described, I'd say, not as between programs and terms, but between terms and expressions. (I, of course, prefer to explain this with reference to Lisp. :-)

By "term" I mean an element in the computational calculus (operational semantics). By "expression" I mean a term that can be represented by source code. Here are two expressions.

2
(+ 1 1)

All expressions are normal forms in the calculus, that is, irreducible terms. One of the two expressions above is an integer; the other is a list of length three. However, if these two expressions were both evaluated in an environment with the usual binding for the symbol "+", you'd get the same thing. Let's use a calculus term

[eval (+ 1 1) e]

to represent evaluation of term "(+ 1 1)" in environment e (which I'll assume has all the standard bindings and is immutable). Now we can clearly state that of these four terms:

2
(+ 1 1)
[eval 2 e]
[eval (+ 1 1) e]

the first, third, and fourth are contextually equivalent — there is no possible way of distinguishing them from each other, no matter what context one puts them in — while the second is not equivalent to the others.

For example, here are three more terms that are equivalent to each other.

(* 2 3)
(* [eval 2 e] 3)
(* [eval (+ 1 1) e] 3)

Of these three, the first is the normal form of the second and third. (So we can explain the relationship between "2" and "(+ 1 1)" using contextual equivalence, rather than resorting to the non-Church-Rosser notion of whole-term equivalence.)

Now, what does it mean to say every semantically distinct program has exactly one representation? Perhaps it means that for every irreducible term N, there is exactly one expression S such that

[eval S e]  →*  N

But suppose we have fexprs, and we have a standard binding for a fexpr $quote that takes one operand and simply returns it. We then have a theorem that, for all terms T,

[eval ($quote T) e]  →*  T

So for every normal form N, one possible program with that semantics is ($quote N). If there's only one program with that semantics, ($quote N) is it. And it's really, really boring.

[edited for a couple of egregious typos]

Given any deterministic

Given any deterministic (confluent), terminating (weakly normalising) language L, it's straightforward to generate a semantically equivalent language L' that satisfies the uniqueness property: L' is just L restricted to the (necessarily unique) normal forms.

If L is some standard typed lambda calculus (e.g. System F, F omega, calculus of inductive constructions) with beta reduction, then you can even give a straightforward syntactic characterisation of L'.

Of course, no one in their right mind would want to program in such a language, because it would necessarily have poor support for abstraction: normal forms aren't generally composable - e.g., just because M is in normal form and N is in normal form doesn't mean M N is. Add back the syntactic sugar for the abstractions you want, and you'll end up with something like the language you started with.
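The non-composability point can be illustrated with the identity combinator, using Python lambdas as stand-ins for lambda terms (an analogy only, not a real term calculus):

```python
# Stand-ins for the lambda terms \x.x and \y.y, both "normal forms" (values).
M = lambda x: x
N = lambda y: y

# The application M N is NOT a normal form: it takes a reduction step
# and evaluates to N. So composing two normal forms need not yield one.
assert M(N) is N
```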

uniquely represented data structures

It is possible to have uniquely represented data structures - structures that have the same canonical (internal) representation regardless of the order of the operations applied to them (hence, no canonical program).

Patricia trees and tries are examples of such structures, as are treaps and skip lists (with deterministic hashing).

I believe this is the best you can do.
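A minimal illustration of unique representation (a sorted-tuple set rather than a Patricia tree; the example is mine): the representation depends only on the set's contents, never on the order of the insertions that built it.

```python
def insert(rep, x):
    """Insert x into a canonically represented set (a sorted tuple)."""
    return rep if x in rep else tuple(sorted(rep + (x,)))

empty = ()
a = insert(insert(insert(empty, 3), 1), 2)
b = insert(insert(insert(empty, 2), 3), 1)
# a == b == (1, 2, 3): one canonical value, so there is no canonical
# "program" (sequence of operations), only a canonical representation.
```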

That's actually a very

That's actually a very interesting correspondence. It could be a way to get a looser constraint in the same spirit - a language of operators where programs are equivalent when they 'compile' to the same trie.

BDD canonicalization

There is actually a thriving industry exploiting exactly such uniqueness/equality conditions, though, as stated several times, one has to stop short of full Turing-computable capabilities.

The technique I'm familiar with relies on Binary Decision Diagrams (BDDs), which can represent digital logic (itself equivalent to a simple if-and-boolean language). BDDs have a canonical representation, so any computation expressible in this simple language (like, say, the circuit logic of a chip) can be compared for equivalence.

I saw these techniques in the formal verification space - specifically ACL2 (work by Bob Boyer and Warren Hunt Jr.) - and at some semiconductor design foundries leveraging them. If people are interested, I can dig up some pertinent papers, though I'm familiar with only a small fragment of the domain.

There are, of course, plenty of unique data-value systems, but the above shows how such data-equivalence systems can and are used to represent programs.
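As a sketch of the technique (a minimal reduced ordered BDD built by Shannon expansion with hash-consing; the variable order and all names are mine, and real BDD packages are far more sophisticated), two circuits denote the same boolean function exactly when they canonicalize to the same node:

```python
class BDD:
    """Minimal reduced ordered BDD: terminals 0/1, hash-consed nodes."""

    def __init__(self, order):
        self.order = order      # fixed variable order, e.g. ("x", "y")
        self.unique = {}        # (var, lo, hi) -> node id (hash-consing)
        self.next_id = 2        # ids 0 and 1 are the terminal nodes

    def _mk(self, var, lo, hi):
        if lo == hi:            # reduction rule: drop redundant tests
            return lo
        key = (var, lo, hi)
        if key not in self.unique:
            self.unique[key] = self.next_id
            self.next_id += 1
        return self.unique[key]

    def build(self, f, i=0, env=None):
        """Canonicalize a boolean function f (an env-dict predicate)."""
        env = env or {}
        if i == len(self.order):
            return 1 if f(env) else 0
        v = self.order[i]
        lo = self.build(f, i + 1, {**env, v: False})
        hi = self.build(f, i + 1, {**env, v: True})
        return self._mk(v, lo, hi)

bdd = BDD(order=("x", "y"))
and_gate = bdd.build(lambda e: e["x"] and e["y"])
de_morgan = bdd.build(lambda e: not (not e["x"] or not e["y"]))
# Equivalence is now a constant-time id comparison: and_gate == de_morgan.
```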

(Edit: got Warren's name wrong.)

i think you mean

Warren Hunt Jr.

you're right

And fixed. I caught some of William Cook's writings recently and I think that just slipped in sub-consciously.


That's also how proof tactics work in languages like Coq and Agda. To prove that two expressions are the same, you only have to prove that their normalized forms are the same.

See this Agda Monoid solver for example.

I can imagine the FAQ for this language

Q: I'm trying to write a program to print the 1000th digit of pi. How can I do that?

A: print 9


Well said!

Nice one


As far as I'm concerned...

I actually dream about a language whose expressions would behave like mathematical expressions. It would have some normalization technique that reduces a beginning programmer's idea (one of many possible implementations) to its simplest (unique) representation.

I admit there can be desires about the actual internal realization of this simplest unique task specification, but this should not be mixed up with the task definition. I hope there could be a calculus for normalizing task requirements, but I also suggest implementing one more thing: giving programmers a choice of how to build and execute a solution to the task to be solved.

So I recommend making a distinction between the task requirement and the solution technique. Finally, I believe the task requirement can successfully be brought into the mentioned unique form.