STEPS Toward the Reinvention of Programming, 2012 Final Report

It appears that the final report from the STEPS program has at long last been released:

STEPS Toward the Reinvention of Programming, 2012 Final Report

If computing is important -- for daily life, learning, business, national defense, jobs, and more -- then qualitatively advancing computing is extremely important. For example, many software systems today are made from millions to hundreds of millions of lines of program code that is too large, complex and fragile to be improved, fixed, or integrated. (One hundred million lines of code at 50 lines per page is 5000 books of 400 pages each! This is beyond human scale.)

What if this could be made literally 1000 times smaller -- or more? And made more powerful, clear, simple, and robust? This would bring one of the most important technologies of our time from a state that is almost out of human reach -- and dangerously close to being out of control -- back into human scale.

An analogy from daily life is to compare the Great Pyramid of Giza, which is mostly solid bricks piled on top of each other with very little usable space inside, to a structure of similar size made from the same materials, but using the later invention of the arch. The result would be mostly usable space, and require roughly 1/1000 the number of bricks. In other words, as size and complexity increase, architectural design dominates materials.

The "STEPS Toward Expressive Programming Systems" project is taking the familiar world of personal computing used by more than a billion people every day -- currently reqiring hundreds of millions of lines of code to make and sustain -- and substantially recreating it using new programming techniques and "architectures" in dramatically smaller amounts of program code. This i made possible by new advances in design, programming, programming languages, systems organization, and the use of science to analyze and create models of software artifacts.

...and there seems to be new research activity with Alan Kay starting up (HARC), which might at least be inspired by STEPS (if not use it as a jumping-off point).


Asymptotic

Inventing the arch sounds like a good idea, but tbh a scale improvement by a mere three orders of magnitude sounds kind of underambitious, especially if it takes a few years to accomplish. The laptop I'm using atm has more memory than the computer I learned on by about five orders of magnitude, not counting virtual memory (which that long-ago machine didn't have). The difference in secondary storage is more like six orders of magnitude. Seems we want improvement on an exponential curve over time.

what are the restrictions?

With bricks and mortar, there are gross physical constraints. Sure, there are still physical constraints for our computing stuff (although: quantum), but they are different. Or are they? Could we measure when the code base is collapsing under its own weight? Etc. There are probably some metrics we can borrow as food for thought from other engineering fields. Then we'd probably also have to invent new ways of looking at code.

What are the most likely things to help get exponential improvements? Somehow I doubt it is coming up with the perfect programming language. I would expect it has to be more along the lines of ML / AI / autogenerating stuff, and getting the dumb humans out of it.

Well, a PL without its

Well, a PL without its context does nothing, as do chromosomes; it doesn't follow that either is unimportant.

Humans aren't stupid, as such. They have strengths and weaknesses, and frankly I think it'd be a bad idea to lose those strengths. The trick is to provide support for the weaknesses without hobbling the strengths.

Code size?

Did you catch that the quote is about code size? I don't see how it makes sense to ask for software whose size decreases exponentially over time.

How much is expressed by a

How much is expressed by a given amount of code.

Aren't those the same thing?


functionality = size * 2^t
size = functionality / 2^t


Yes.

Yes. I mean to suggest that the size of what you're expressing increases exponentially over time; therefore, in order to keep the size of the expressing code the same over time (which one wants, at least to the extent of remaining on a human scale), you need its functionality to increase exponentially over time.

re: space vs. software complexity

While you have several orders of magnitude extra space (storage, memory, etc.), that space is also divided across many different responsibilities: version history, parallelism and distribution, caching and partial evaluation, display and rendering resolution, connectivity, etc.

It's unreasonable to assume that scale improvements in code should linearly follow scale improvements in storage or memory. If the relationship were quadratic instead, a three-order-of-magnitude increase in complexity might account for six orders of magnitude in storage. I do not know what the practical and essential relationships are, but I doubt the relationship between space and software complexity is linear.

In any case, achieving three orders magnitude scale improvement within a reasonable time frame seems very ambitious to me.

I don't want reports I want a system

When can we get STEPS or is it never to be released?

Depends on the algorithm.

I think what is "intuitive" depends on the algorithm. Quick-sort has an obvious recursive divide and conquer approach, as do many other algorithms. On the other hand many simple operations on collections are best expressed as a foreach.

I think a function call introduces a new context, which a 'block' statement like 'if' or 'for' does not. I think coping with nested contexts has a higher cognitive load than a single context, but if you start having to manually push things into a stack, recursion is probably simpler. These two things are a balance and hence there is no single answer.

I think that nested folds and maps quickly get difficult to think about because the types of the objects returned are not visible. In these cases an imperative mutable collection where the type is stable throughout the whole algorithm is easier to think about. I think where we have matrices and multi-dimensional constructs, explicit indexes are easier to deal with.

Incidentally? Screw folds.

Why the hell are they called 'fold' instead of 'map'? We had a perfectly good abstraction already. What the heck is this new syntax for and why the heck do people think it's a better idea? People started talking about 'fold' as if it were something new, and as far as I can tell they only spread confusion.

Missing context

Keean's post mentions map and fold, but I assumed it meant the standard thing. A search for 'fold' in the STEPS report turns up nothing. Who is introducing nonstandard terminology for fold?

It was the supposed difference that annoyed me.

He mentioned maps and folds as though they were not the same thing. I think that's misleading, because someone reading it might get the idea that one (or both) of them mean something else.

News to me

Since when has fold meant map? Is this lisp lingo?

Started there, but not limited to it.

Fold, map, for-loop, while-loop, foreach, whatever you want to call iterating over the members of a collection.

People even went so far as to add 'list-fold' to Scheme, which did the exact same thing as 'map'. I could not understand how anyone could be so clueless.

Haskell, Miranda, Squiggol

In Haskell, fold and map are different higher-order functions. That dates back to Miranda, and I think you can trace it even further back to notations such as the Bird-Meertens formalism or Z, possibly earlier still.

Fold, left-fold, right-fold, and map more or less have the same meanings, with differing definitions, across most FP languages.

Not sure why you think they are the same; it looks like you come from a different tradition.
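
For reference, the usual textbook-style definitions of the three (for lists, in Haskell) make the distinction concrete:

    import Prelude hiding (foldr, foldl, map)

    -- right fold: replaces (:) with f and the empty list with z
    foldr :: (a -> b -> b) -> b -> [a] -> b
    foldr _ z []     = z
    foldr f z (x:xs) = f x (foldr f z xs)

    -- left fold: threads an accumulator through the list, front to back
    foldl :: (b -> a -> b) -> b -> [a] -> b
    foldl _ z []     = z
    foldl f z (x:xs) = foldl f (f z x) xs

    -- map: rebuilds the list, transforming every element
    map :: (a -> b) -> [a] -> [b]
    map _ []     = []
    map f (x:xs) = f x : map f xs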

Okay, what's the distinction?

If you maintain that they are different things, what is the idea that makes a fold different from a map? Why would I call one rather than the other when I want to iterate over the members of a collection?

folds, etc.

A quick overview when it comes to lists:

  • Fold takes a list and produces a value. The classical example is finding the sum of the elements in a list:
    (fold + 0 '(1 2 3 4)) => 10

  • Map takes a list and produces another list of exactly the same length, but with different elements:
    (map (lambda (x) (+ 1 x)) '(1 2 3 4)) => (2 3 4 5)

  • Filter takes a list and produces a (possibly shorter) list of the elements that satisfy a predicate:
    (filter even? '(1 2 3 4)) => (2 4)

  • Unfold takes a value and produces a list:
    (unfold (lambda (x) (> x 4)) (lambda (x) (* x 10)) (lambda (x) (+ x 1)) 0) => (0 10 20 30 40)

...The combination of fold and unfold is "universal" in the sense that you can write any list traversal as a combination of the two. Also interesting is that you can write "map" as a fold or an unfold.
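
A small sketch of that last claim, in Haskell for brevity (using the standard foldr and Data.List.unfoldr):

    import Data.List (unfoldr)

    -- map written as a right fold: rebuild the list, transforming each element
    mapViaFold :: (a -> b) -> [a] -> [b]
    mapViaFold f = foldr (\x ys -> f x : ys) []

    -- map written as an unfold: grow the output from the remaining input
    mapViaUnfold :: (a -> b) -> [a] -> [b]
    mapViaUnfold f = unfoldr step
      where
        step []     = Nothing
        step (x:xs) = Just (f x, xs)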

Hmm, okay.

In traditional implementations where map runs left-to-right, I would probably still write it as map using a function closure whose state was modified by the calculation, then ignore the list return and ask the function for its state.

But it's been true for a long time that the standards don't require the traditional left-to-right behavior, so I guess there is a use for 'fold' as a way of stating that the sequence matters.

Or with the very silly example of '+' the 'apply' call would have been a much better choice.

I need to think about it... Worth a library implementation, I guess, although it seems fairly useless to me in the presented examples, since for each of them there is always a clearer way to do the same thing.

Do a bit of FP

The idea is that you can fold arbitrary (algebraic) structures into a single value. And, of course, the dual also exists, you can unfold a value into a structure.

Since a list is also a value it follows that a map can be expressed in terms of a fold.

If you do some functional programming you'll usually recognize this within a few weeks, and you end up implementing lots of manners in which you fold different data structures.
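
A minimal Haskell sketch of folding a non-list algebraic structure, along the lines described above:

    -- folding a user-defined tree, not just a list
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    foldTree :: (b -> a -> b -> b) -> b -> Tree a -> b
    foldTree _ z Leaf         = z
    foldTree f z (Node l x r) = f (foldTree f z l) x (foldTree f z r)

    -- e.g. summing every element in the tree
    sumTree :: Num a => Tree a -> a
    sumTree = foldTree (\l x r -> l + x + r) 0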

Writing for the reader

Collection operators like map & fold are such core primitives in most FP (and especially parallel programming) systems that they are key to how readers think. That, in turn, impacts how they read other people's code. The primitives communicate both the shape of the input and output and the dependency structure of the computation. So I would expect that passing a stateful function to a map in a sequential system, in order to exploit the scheduler semantics of a particular language runtime version, would have a high chance of failing code review.

Why?

Fold, unfold, map exploit the algebraic properties of containers, more specifically that they can be seen as lists and be traversed in certain manners.

You can also exploit the algebraic properties of the operators you fold with. More specifically, if your operators abide by certain laws, then you can often parallelize computations efficiently. (Which is good, for instance, for Google-style concurrent data-parallel computations over extremely large clusters.)

It is a bad idea to use a stateful operator to implement a fold: it breaks invariants and doesn't allow for optimizations.

But if your computation runs 50% faster and you can document it, I would allow it.

Algorithms not containers

Maps and folds get it wrong because they are trying to capture the algebraic properties of containers. Instead the focus should be on the algebraic properties of algorithms. We should study and group algorithms by the data access patterns they require, and then containers should have APIs that implement the access patterns required by classes of algorithms. If I have a quick-sort algorithm, a merge-sort, or a stable-partition, I want the containers to support the access necessary for each algorithm in an abstract way that lets me not care what kind of container I have, as long as it supports at least the access requirements of the algorithm.

Why should programmers even see container algorithms?

In a high level language, you don't have linked lists and arrays and suchlike. In a high level language, you have References to make side effects work on different variables simultaneously, Maps to do key-value lookups, and Sequences for just about everything else. And it's up to the compiler (or interpreter) to make those containers match the way the program uses them. Anything it's possible for the programmer to not have to think about, the programmer should not have to think about. The programmer has enough work specifying what she means to do, and fewer, simpler concepts to make algorithms out of are a win. The compiler/interpreter can work out what underlying container algorithm to use.

It's not all that difficult to make a reactive container. The thing about structures where some operations have slow access, is that slow access usually involves traversing all of the data, or a substantial fraction of it. And if you're doing that, it's not much additional overhead, while making one of those accesses, to copy the data into a container that's more efficient for the kind of use the program is making of it.

Often reactive containers' representation switches can be statically compiled (in which case you'd more accurately call them proactive containers). If the compiler can tell that, in this section of the program, a bunch of stuff gets added to the sequence, all by insertions, and that never happens again, while later sections of the program do a lot of random-access operations, then it can issue the instructions to build a list while data is being added and transform the list into an array when the program exits that scope.

Even in programs that the compiler can't analyze that closely, you can still get statically compiled reactive containers given the output of a container access profiler to identify the scope boundaries where access patterns change and it's a net gain to make a representation switch.

And even in the version of the program that makes the access profile, or when compiling without a profile, you can still have dynamically reactive containers that detect when to make the switches at runtime. There is some overhead in monitoring the access pattern and how well it fits the current underlying data structure, and the container switches when it becomes clear that the access pattern has changed. But that usually means you make a dozen or so inefficient accesses before the switch, so it works but it's not ideal.
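
A very rough Haskell sketch of the idea (the names and the switching policy are hypothetical, and it assumes the vector package): the container starts as a cheap-to-extend list and freezes into an array on the first random access.

    import qualified Data.Vector as V

    data Reactive a = Building [a]          -- elements held in reverse insertion order
                    | Frozen (V.Vector a)   -- switched to an array for random access

    push :: a -> Reactive a -> Reactive a
    push x (Building xs) = Building (x : xs)
    push x (Frozen v)    = Building (x : reverse (V.toList v))   -- fall back to list form

    at :: Reactive a -> Int -> (a, Reactive a)
    at r@(Frozen v)  i = (v V.! i, r)
    at (Building xs) i = let v = V.fromList (reverse xs)         -- one-time representation switch
                         in (v V.! i, Frozen v)

A dynamically reactive version would additionally monitor recent accesses and switch back when the pattern changes, as described above.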

Turtles all the way down

Basically agree with most sentiments on this thread. Trick is that, at some point, the programmer has to specify something, and the framework (or other readers) must reason about it. Maps, folds, etc. provide a lot of great structure for both autotuning & legibility.

Why should programmers ever see containers?

Still everything about containers, not about algorithms. The focus should be on algorithms. It is not about hash-map or array; it's about "I need 'max'", "I need the greatest common divisor", "I need to rotate, partition, differentiate", etc. You start with the algorithms you need to solve a problem, and that tells you the APIs you need containers to support.

I really don't care about collections and containers; the compiler can choose the most efficient one. However, a 'reverse' algorithm may need a bidirectional iterator, which may mean the compiler is not allowed to use a singly linked list, but I don't need to know that.

Map and fold are focused on fitting the algorithm (the code) to the structure of the container. This is the wrong way around; we should be fitting the structure of the container to the algorithms we want to use.

+1

Map and fold are focused on fitting the algorithm (the code) to the structure of the container. This is the wrong way around; we should be fitting the structure of the container to the algorithms we want to use.

I fully agree.

Which compilers do this?

It's not all that difficult to make a reactive container.

Sounds lovely. Are you describing what a "sufficiently smart" compiler should be able to do, or does this exist in something that I can download and play with?

Automatic insertion of transformation between arrays, lists and maps raises a lot of questions about when the compiler can "analyze that closely" to make it work.

  • Arrays are ordered while maps are unordered - what does the interface look like?
  • If a program freely mixes operations of different types how does the compiler cluster together the appropriate representation, what cost model includes the transformation?
  • What type is the fallback container for when the compiler cannot perform the analysis?

Algorithms not containers?

Isn't this what C++ tries to do? Iterators and iterator concepts/kinds form the bridge between the algorithms and the containers. Even Python does this (it has generic APIs for sequences, associative stores, etc.).

Remember Wirth said programs are algorithms plus data structures. I doubt one can completely eliminate consideration of the data structures: they typically constrain the performance of particular operations.

Perhaps worse, underlying hardware makes a big difference too. It's not just caching but the very representation of things like integers and pointers as sequences of bits.

I personally would say most algorithms are in fact driven almost deterministically by the underlying data structure. This is because, roughly, the stuff in an object is either a user value or a pointer, and pointers only admit one algorithmic operation: recursion.

As an example, I have just designed a data structure intended to track every live address allocated on the heap (i.e. keep track of malloc and free). It is intended to allow a GC to find garbage. It has to be lightning fast because it is used in every program. It has to have reasonable storage use. I came up with a 4-level system using 1. a linear sorted array (32-bit key), 2. a red-black tree (STL map, 16-bit key), 3. a fixed-size array (5-bit key) and 4. a sorted array with sub-key/leaf data pairs (8-bit key). Only the leaf data is ever deleted; no nodes are ever deleted.

The size of the bottom level key more or less HAS to be either 7 or 8 bits.

It outperforms Judy and STL Map on real allocation data for insert/delete, but find_less_equal is a bit slow. The actual top-level algorithms are more or less completely determined, up to fine tuning, by the data structure, and the data structure is more or less determined by the operational requirements. The performance comes from trying to match the design to the non-random distribution of allocations in real programs (especially ones that are garbage collected). Malloc tries to reuse freed blocks; otherwise allocation is sequential.

Wirth

I was going to quote Wirth on the basis that I find the focus on data structures to the disadvantage of algorithms to be a problem.

I agree all data structures are not equal, and on modern hardware you want to be using arrays as caching behaviour can dominate the 'O' of the algorithm.

My point was that to treat all data structures as a list and reduce all algorithms to map and fold is actually making things more complicated.

The idea of focusing on algorithms is more about how to design generic container APIs. You need to categorise all the algorithms by their patterns of data access, so that you can see what the categories should be.

You are correct that this is how Stepanov designed the C++ STL, but C++ wasn't like that before the STL; it was just normal OO stuff. Stepanov points out that most people think the STL is just about templates and miss the point: generic programming is about presenting algorithms in their most general setting without losing any efficiency.

When you are talking about your malloc implementation, all the data structures you describe are standard structures. I am sure if you wanted to make a different point you could have described its design in terms of the different algorithms you put together. As you probably know what malloc does very well, you probably skipped the algorithmic step, but I suggest you did start with an algorithm. Perhaps something like "search a collection of memory regions for the first with size greater than the required size", or something similar, depending on whether the goal was performance, or lack of fragmentation, etc. You would then have chosen your search algorithm, etc. The collection types themselves should be generic parameters to the code, so you can easily benchmark a B-tree against a red-black tree, which could be faster due to memory locality in the nodes.
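
As a sketch of that last point (in Haskell rather than C++, since the thread already uses it; the names are made up for illustration): the search is written once against an abstract traversal interface, and the concrete container holding the regions stays a swappable, benchmarkable parameter.

    -- "Search a collection of memory regions for the first with size >= the
    -- request", written against any Foldable container of (size, payload) pairs.
    firstFit :: Foldable t => Int -> t (Int, a) -> Maybe (Int, a)
    firstFit want = foldr pick Nothing
      where
        pick region@(size, _) rest
          | size >= want = Just region   -- first fit found; stop here
          | otherwise    = rest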

Analysis: what's an algorithm

OK, I have been thinking about this. In maths we have axioms, lemmas, and theorems.

So here's the categorical view: a data structure is the axiomatic level of operators. For example a product has projections. Actually no, a product IS projections.

At the lemma level we have operations. For a tree a recursive descent is an operation. It's NOT an algorithm because it's a trivial combination of the operators.

At theorem level we have algorithms. Inserting a key in a digital trie is not an algorithm, it's an operation. Inserting a key in a *balanced* tree is an algorithm.

So, in the context of my compiler, many optimisations are just substitution so that is an operation. (The operators are pattern matching and construction of values by type constructors).

Some optimisations, however, are algorithms, for example parallel assignment (which Felix does for self-tail-recursion optimisation). Choosing the order of the optimisations and when to do them is an algorithm (and a very hard one).

Inlining is an algorithm, and also a very hard one. In Haskell, strictness analysis is an algorithm.

Yesterday, binary chop of an array was an algorithm but today it is just an operation.

So the important point here is that in a categorical view, a data structure is not separate in kind from an algorithm: it's just the base-level set of operators. There is no data. There are only functions.

Addenda

My point was that to treat all data structures as a list and reduce all algorithms to map and fold is actually making things more complicated.

Map and fold trivially generalize over all (functional, maybe more) data structures.

The idea of focusing on algorithms is more about how to design generic container APIs. You need to categorise all the algorithms by their patterns of data access, so that you can see what the categories should be.

Your access pattern over a data structure forms an algebra too.

Trivial?

You think it's trivial? Then define it. Yes, it can be done. But it took Barry Jay over a decade to figure out how, and no one is listening.

And I have serious doubts about the triviality when programmers still don't "get" that fold is not unique unless the argument function is associative. Neither fold_left nor fold_right applied to arbitrary functions and lists is actually fold. How could they possibly be, since they produce different results? And that's just lists, almost the simplest possible data structure.
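
A concrete illustration of that point, using subtraction (which is not associative), in Haskell:

    -- the two folds disagree as soon as the operator is not associative
    left, right :: Int
    left  = foldl (-) 0 [1, 2, 3]   -- ((0 - 1) - 2) - 3  ==  -6
    right = foldr (-) 0 [1, 2, 3]   -- 1 - (2 - (3 - 0))  ==   2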

Uh?

You can always fold over the constructors of a data type? That's just a notion, you don't need to put it into math.

If that is what that Barry Jay person you talk about did mathematically, good for him, but the notion in natural language is just one sentence, so I would call it trivial?

What is your definition of fold?

Neither fold_left nor fold_right applied to arbitrary functions and lists are actually fold. How could they possibly be, since they produce different results?

This doesn't make a lot of sense to me. The straightforward definition of fold, as has been recognized for ages, is the right fold; simple folding over the usual algebraic constructors of a list.

But you can take different views of a list, and a lot of (imperative) list implementations allow for multiple manners of traversing a structure, so (for instance) the left fold also arises naturally.

Indeed, if you simply take the natural language definition of fold, "bend (something flexible and relatively flat) over on itself so that one part of it covers another," you quickly start to realize that there are lots of different manners in which you can fold any data structure into different values.

That happens a lot when doing functional programming: bottom-up, you tend to implement multiple manners of folding data structures. It is odd; some of them are more 'primitive' than others, but it might make sense to implement, for instance, a fold which evaluates arguments at even and odd positions differently.
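
For instance, such an even/odd fold might look like this in Haskell (foldAlt is a made-up name, just for illustration):

    -- apply f at even positions, g at odd positions, combining results with k
    foldAlt :: (a -> b) -> (a -> b) -> (b -> c -> c) -> c -> [a] -> c
    foldAlt f g k z = go f g
      where
        go _ _ []     = z
        go p q (x:xs) = k (p x) (go q p xs)

    -- e.g. an alternating sum: foldAlt id negate (+) 0 [1,2,3,4]  ==  -2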

From context, it looks to be

what I'd naturally tend to expect: the reduction of a list (or whatever) to a single element by means of a binary operation using any associative grouping whatsoever (and therefore potentially exploiting any parallel processing capacity that might be available). So that (fold + (list 2 3 4 5)) could be ((2 + 3) + (4 + 5)) or any of four other groupings.
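
A minimal Haskell sketch of that balanced grouping (the name is made up; as the reply below notes, it is only correct for an associative operator on a non-empty list):

    -- reduce a non-empty list in O(log n) rounds of pairwise combination
    foldBalanced :: (a -> a -> a) -> [a] -> a
    foldBalanced _ [x] = x
    foldBalanced f xs  = foldBalanced f (pairUp xs)
      where
        pairUp (a:b:rest) = f a b : pairUp rest
        pairUp rest       = rest

    -- foldBalanced (+) [2, 3, 4, 5]  ==  (2 + 3) + (4 + 5)  ==  14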

Yah. How to fold a map.

That's a tree-like fold over a non-empty list, where you exploit the fact that the operator is associative. Good for cluster processing.

But like the map in your car there are countless manners in which you can fold a structure. Where your spouse usually uses the wrong manner.

So I wouldn't say there's one preferred method of defining a fold operator. Following the constructors is one manner, following the properties of the operator is another, following the manner in which your data structure is implemented is another, or, as Keean hints, following the algorithm you're implementing is also good.

Start with proofs.

Your access pattern over a data structure forms an algebra too.

But not a very useful one? What does it give you?

Instead, start with sorting algorithms; they actually do something useful that you might want to do. Look at how each sorting algorithm needs to access the data: does it need random access, does it need bidirectional access, does it require more than one forward pass, and so on.

The API design comes from the algorithms. As Stepanov puts it:

[Starting with the collection API]

It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.

Not very much

Your access pattern over a data structure forms an algebra too.

But not a very useful one? What does it give you?

Not very much, I agree. I imagine that, at best, an academic could at some point write something about how to select the most efficient model given the algebra generated by an access pattern.

That is useful

Actually that is useful, but only if you start with algorithms, and it's not purely academic: I do this in C++ and Rust right now (as does the C++ STL).

Given a requirement like "reverse this collection" and a collection, trait overloading in Rust can easily pick the best reverse algorithm to use: one over a forward-only iterator if necessary, or one over a bidirectional iterator if available.

But this is only possible because you categorise the access pattern of the algorithm. My point was you get none of this if you categorise the access pattern of the data structure, or at least you don't get useful categories with which to write algorithms.

map as fold?

What do you mean you can write a map "as a fold"? I have heard this claim before but it is nonsense. You need a fold AND constructors (eg Empty and Cons for a list).

What's more, in general, folds necessarily discard structural information. Consider an arbitrary tree populated with values: how can any of the many possible visitors yielding the values provide enough information to construct an image of the tree?

The simple fact is you can't even do this with a list; there are at least two common folds (left and right).

Map is a highly specialised operation (functorial). Folds are more general and thus much weaker: they discard structure. AFAICS there is a unique fold for any data structure with one type variable, but there is a constraint on the functional argument that it must be order independent (for example addition). Only with this constraint is fold universal... and that constraint seems to make it rather hard for a fold to perform the same function as a map (though perhaps not impossible if the structure were transmitted as well, e.g. a list fold given the position in the list as well as the value).

Map as a fold...

What do you mean you can write a map "as a fold"?

Nothing more than:

map_ f = foldr ((:) . f) []


You can write fold as a map too, if you have closures.

It works either way. That was my original point. If you want a fold you can map an accumulating function over a sequence and then query the state it's built. If you want a map, you can fold a constructor over the sequence with an element-transforming filter.
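
A hedged sketch of that first direction in Haskell, with a mutable reference playing the role of the closure's state (as the reply below points out, this only works when the map is guaranteed to run sequentially, left to right):

    import Data.IORef
    import Control.Monad (void)

    -- a left fold implemented by mapping a stateful function over the sequence
    -- and then asking for the accumulated state afterwards
    foldViaMap :: (b -> a -> b) -> b -> [a] -> IO b
    foldViaMap f z xs = do
      acc <- newIORef z
      void (mapM (\x -> modifyIORef' acc (`f` x)) xs)   -- the list result is ignored
      readIORef acc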

nope

This requires a specific sequential semantics and can expose implementation details that may change. So, you may get a non-deterministic or even a partial fold, but not a reliable left/right fold.

To gather or not.

Greg's answer is more comprehensive, but I would add: a map is a set of independent computations. A fold is a gather operation (also called a reduce), so the computations are not fully independent and can be arranged into O(log n) levels of dependencies.

APL has had fold and map since early 60s

APL has / (reduce or insert or fold) and ¨ (each or map). It also provides \ (scan):

      ×/⍳10
3628800

      ×\⍳10
1 2 6 24 120 720 5040 40320 362880 3628800

      -\⍳10
1 ¯1 2 ¯2 3 ¯3 4 ¯4 5 ¯5

Alternatives to explicit indicies?

I think where we have matrices and multi-dimensional constructs, explicit indexes are easier to deal with.

What do you think about things like regions in ZPL?

Looks interesting

I think divide and conquer is a good basic implementation strategy. If you could implement a matrix multiply for a 2x2 matrix and then recursively call it for each quarter, then you would have a naturally cache-oblivious algorithm, which would be great. You still have the problem of accessing the cells in the 2x2 matrix though.

Regions are the higher dimensional equivalent of 'zip' type functional strategies, where you manipulate the indexes rather than the data. I think you definitely can program in this style, but it seems harder as it is one more step away from the concrete.

I think concrete is easier than abstract, and that the taller an abstraction hierarchy is, the harder it is for people to use.

An example I saw the other day was reversing a list in place. You take the list indexes as a region. You split the region in half, and then zip the two halves together. You then take pairs from the region and swap the values in the cells. To me it seemed a lot simpler to work with two indexes, incrementing and decrementing them and swapping the values that way.
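
For comparison, the two-index version is short even in Haskell, using a mutable vector (assuming the vector package):

    import Control.Monad (when)
    import qualified Data.Vector as V
    import qualified Data.Vector.Mutable as M

    -- reverse in place: swap the two ends and walk the indices toward each other
    reverseInPlace :: V.Vector a -> V.Vector a
    reverseInPlace = V.modify (\m -> loop m 0 (M.length m - 1))
      where
        loop m i j = when (i < j) $ do
          M.swap m i j
          loop m (i + 1) (j - 1)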

PDF lacks Pictures

Anyone else with this problem?


Bad link, pointing to LtU.

Bad link, pointing to LtU.

Fixed.

Thanks for pointing that out; I fixed that bad link (it turned out to be a missing "http://").

For some context, see past

For some context, see past posts on the STEPS program.

Ok

A part of the answer to reducing code size could be in fancy high-level DSLs, but how much can we get from that? The idea is that to do something useful with some language, it has to have some level of complexity, and more complexity implies larger code size. Also, the more we want a DSL to cover, the more complex it gets. I think we can get somewhat smaller code size with DSLs, but how much?

Another part of the answer could lie in well-designed libraries meant for reuse. Imagine a program for a "buy bread" task. You could end up with thousands of rules that drive a robot to a store and back. But a lot of tasks share some of those rules, so there is room for reduction. Given a good background of general rules, one could write the code for "buy bread" in ten lines.

And another part of the answer could lie in implicit knowledge. A lot of commands that humans carry out also involve implicit condition checking and sub-command assertions. That means a lot of ifs with default values that can be overridden, which could dramatically reduce the amount of code written. Those ifs, written once and used everywhere, could form a very powerful and non-redundant code base.

Inverse problem

Consider we want to write a program to display a shopping list or something. To do this we take a language interpreter written in another language, and a development framework, maybe a test framework, a deployment framework, a library to do maths, a library to do graphics, a library to support some favoured coding style, etc. By the time you have written a single line of code you are touching thousands of lines and importing millions. You see this in JavaScript all the time. I often find it is simpler to write in pure JavaScript, using no frameworks and as few libraries as possible (although you still end up with the inevitable polyfills for compatibility). If people actually learned to write a program instead of just plugging frameworks and libraries together, many tasks could result in less code. Of course many tasks would also require more code; consider trying to incorporate speech recognition into your shopping list. I guess the real solution is to know when a task really is complex and requires a library, and when it would be quicker and more direct to write code from first principles. In my experience you only get this intuition from experience.

great experience in this problem

I have written about 5 million lines of code across different projects, and 95% of it is dead. All the code gets rewritten every 5 years because the base it rests on changes (hardware, frameworks, data). That is 95% overhead per 5 years! Each developer spends 95% of their time that way.
What I learned from these lessons:
- People's brains are like cars, or like butterflies that steer toward flashing lights while navigating by GPS.
- Code is not reason. Neither is art. All is the matrix.
- Live coding only (OSS, no encryption: we must have full control of any part at any moment, because programs are buggy and we must fix them live; complex multilayer compilers are bad, transparent translation only).
- Languages and compilers are instruments only (no syntax religion).
- All is trees and recursion.
STEPS tries to control that recursion. If we look at a picture, we can find some keypoints, as OpenCV does. Using those keypoints we can pack the picture into a hash. Treating the hash as a new picture, we can pack information again at the next step. Computer vision works by that kind of algorithm. This is the recursion that the bricks example in STEPS is about.
Another vision, to be continued...