Using coalgebraic structures for traversing ADTs lazily

I am looking for references to the above, especially for implementations in ML-like languages.

Rationale: I needed to write a comparison function for the contents of binary trees. Using a coalgebraic structure to decompose the two trees in steps was the only solution I could think of. I am looking for possible generalizations of what I used.

It looks like CPS, but it would be interesting to see it explained better.


You must be reading my mind.

As we speak, I'm refining a paper about this exact question, though not quite the complete question: your answer inherently requires some form of laziness, and I'm talking about languages that natively (and sometimes naïvely) implement general laziness, not ML.

A good place to start with issues of semantics and expression is Lloyd Allison's "Circular Programs and Self-Referential Structures."

His breadth-first traversal example is particularly compelling. It's written in an extended continuation passing style where the tail of cons is considered to be in tail position.

In the direct style, this corresponds to the control operator mapCont in Haskell's Monad Template Library, which you can use to do what you want pretty easily.

A strict language with higher-order functions can macro-express call-by-name evaluation. Add mutation, and you can implement call-by-need laziness. Check out Kent Dybvig's sample implementation of delay and force in TSPL3. Another good starting point is Philip Wadler et al.'s classic "How to add laziness to a strict language without even being odd."
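For concreteness, here is a minimal sketch of delay/force in a strict ML, using a closure plus a mutable cell for the memoization (my transliteration of the idea, not Dybvig's Scheme code; OCaml's built-in Lazy module provides the same thing):

(* A sketch of delay/force via closures and mutation. The explicit
   (fun () -> ...) at each delay site supplies the call-by-name part;
   the reference cell upgrades it to call-by-need. *)
type 'a promise_state =
  | Delayed of (unit -> 'a)    (* suspended computation *)
  | Forced of 'a               (* memoized result *)

type 'a promise = 'a promise_state ref

let delay (thunk : unit -> 'a) : 'a promise = ref (Delayed thunk)

let force (p : 'a promise) : 'a =
  match !p with
  | Forced v -> v
  | Delayed thunk ->
      let v = thunk () in      (* run the suspension once... *)
      p := Forced v;           (* ...then memoize: call-by-need *)
      v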

"and sometimes naïvely"

Leon, what do you mean by this?

Also, I'd be very interested in reading your final paper.

Returning pairs of lazy values

Well, returning pairs of lazy values is difficult to implement well. I have unconfirmed suspicions about specific reasons why GHC is weak in this regard.

As for the final paper, it will be available soon, I promise. :-)

It's now available

You can take a detailed look at the solution I outlined above here.

What sort of comparison?

Maybe I'm being thick, but what is the idea behind using coalgebraic structures to represent a binary tree ... is it infinite?

If it's not infinite, maybe you're just interested in the "incremental" feature of coinduction. If so, perhaps what you're really looking for is datatype differentiation, whereby you can mechanically derive a cursor type from any regular datatype. With that cursor type you can incrementally traverse a tree and even (without mutation) "update" arbitrary parts of it in constant time. See Huet's "Zipper" for probably the most famous example of this.
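To make the cursor idea concrete, a minimal binary-tree zipper in OCaml (a sketch; Huet's paper gives the general construction, and the navigation functions shown are only the ones needed to convey the idea):

(* A zipper = the focused subtree plus the path back to the root. *)
type 'a tree = Leaf | Branch of 'a tree * 'a * 'a tree

type 'a context =
  | Top
  | WentLeft of 'a context * 'a * 'a tree    (* we went into the left child *)
  | WentRight of 'a tree * 'a * 'a context   (* we went into the right child *)

type 'a zipper = 'a tree * 'a context

let down_left (t, c) =
  match t with
  | Branch (l, v, r) -> Some (l, WentLeft (c, v, r))
  | Leaf -> None

let up (t, c) =
  match c with
  | WentLeft (c', v, r) -> Some (Branch (t, v, r), c')
  | WentRight (l, v, c') -> Some (Branch (l, v, t), c')
  | Top -> None

(* Replacing the focused subtree is O(1) and purely functional. *)
let replace (_, c) t' = (t', c)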

real world applications

I don't have an example of a binary tree that I can publicly state, but here is something that might be interesting to you and the OP:

Sometimes a tree or list models alternating hidden and visible structures, even though you can technically 'see' every structure. The general technique of using a coalgebra to capture noticeable behavior in a program was first described by Reichel in [1].

Suppose you have some problem solving "game" like a Rubik's Cube. There are three kinds of games a machine problem solver might be interested in:

(1) define a problem space
(2) define a particular problem in that space
(3) some mixture of (1) and (2)

If we want to build a generic problem solver for a Rubik's Cube, then it must be capable of handling all three.

The pure problem space approach is the most general: specified functions for states and operators, but unspecified initial state and goal criterion.

Defining a specific problem within the problem space specifies the initial state and goal criterion.

In a Rubik's Cube, the goal criterion is always the same. Therefore, it is okay to include the goal criterion in the problem space definition.

Now, given this Rubik's Cube, can we "learn" a general technique for solving Rubik's Cube games fairly efficiently? Well, believe it or not, Carnegie Mellon awarded a Ph.D. to Richard Korf under Herb Simon in the 1980s for a technique to solve Rubik's Cube-like games. So, we know it is both possible and that computer scientists find Rubik's Cubes a hacker problem. :)

What makes a Rubik's Cube a sexy problem is its resistance to traditional problem solving methods that attempt to "learn" a general strategy. The immunity stems from the fact that all known algorithms for transforming a Rubik's Cube from its initial state to its goal state require previously satisfied subgoals to be violated later in the solution path. To overcome these non-serializable subgoals, what Korf had to do was define a new problem structure type: operator decomposability.

In effect, what we really want the machine to learn is a macro operator that specifies how to move two steps forward toward the goal criterion, while concealing some iterative mess of one step backward, one step forward sequences. We want the macro operator to represent any intermediary steps that violate previously satisfied subgoals.

In problem solvers, a tree or list is used to represent the alternating states and operators performed from an initial state to a goal state. Allowing a macro operator effectively suggests "nested" (invisible) alternating states and operators. All the problem solver records is

macro_operator : state -> state'

However, you still want the problem solver's dependency model to provide you with a trace of rule executions. Within the system, the macro operator must be self-describing. One way to implement this would be to allow the macro operator to internally memoize its rule executions to a stream.

This is meant to be a fun way to answer your question, although I have more practical, real world examples.

[1] H. Reichel. Behavioural equivalence — a unifying concept for initial and final specifications. In Proceedings of the Third Hungarian Computer Science Conference. Akademiai Kiado, Budapest, 1981.

Comparing content


Maybe I'm being thick, but what is the idea behind using coalgebraic structures to represent a binary tree ... is it infinite?

No, I forgot to mention that the content of the tree is ordered. I wanted comparison on content (a tree 1-(2-3) equals a tree (1-2)-3). I failed to find a good recursive definition. Now, in a lazy language it wouldn't really be a problem: you could lazily describe the content with a list, then compare the two lists.

The good thing about the latter approach is that it doesn't evaluate more than the nodes needed.

I wanted the same, so I built some combinators that lazily transform a tree into a generator over its content (a coalgebraic list), such that I can compare the content of the two trees [also lazily]. I liked the solution, though it was tricky to get the same running cost as in a lazy language.

[The type of the lazy colist in ML would be something like coL a = unit -> (a * coL a) opt ] [Hidden subliminal ad on purpose ;-)]
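Rendered in OCaml, that type (and the two obvious constructors) might look like this sketch:

(* The lazy colist type mentioned above, in OCaml: forcing the thunk
   yields either None (end) or the head and the rest of the colist. *)
type 'a colist = unit -> ('a * 'a colist) option

let conil : 'a colist = fun () -> None
let cocons (x : 'a) (xs : 'a colist) : 'a colist = fun () -> Some (x, xs)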

In the lenient-evaluation

In the lenient-evaluation language I'm developing, coinductive types are pretty easy to construct with a special constructor 'field <= seed'.

define fibs_field = {first=\r x->(x.0), rest=\r x-> r <=(x.1,(x.0+x.1))} 
define fibs = (fibs_field <= (0,1))

fibs.first = 0
fibs.rest.first = 1
fibs.rest.rest.first = 1
fibs.rest.rest.rest.first = 2

But, unlike in lazy languages, each call to 'fibs.rest' will potentially repeat the computation that eventually results in the data 'fibs_field <= (1,1)'.

I'm certain this could be used to transform the data structures into streams with explicit termination, but ensuring it is incremental (if not using laziness annotations... which aren't guaranteed as more than suggestions in this lenient language) would probably require a bit more work than I'm willing to show here.

[Edit: Loosely, an incremental valuation would involve explicitly carrying around a 'stack' of subtrees that you have yet to reach from the right side of each node. The first construction for the stream would be O(lg N) for a balanced tree, and each step thereafter would be an amortized O(1) if you're simply walking the tree.]

Do you mean ...?


define fibs_field = \x -> {first=x.0, rest= fibs_field <=(x.1,(x.0+x.1))} 
define fibs = (fibs_field <= (0,1))

I assume not, but I ask because the above is what would make sense to me. Would you mind explaining how, in your language, your call <= passes through the apparent structure and how 'r' gets bound?

A 'field' is a record of

A 'field' is a record of functions, each with two arguments (or more, if the element is to return a function); the first argument is later replaced with the field, and the second argument with the seed, upon request of any particular property of the coinductive value constructed as (field <= seed).

So, in this case, 'r' is getting replaced with 'fibs_field' at each step, and 'x' with the current seed (which was (0,1)). Example calls:

fibs.rest = (fibs_field <= (0,1)).rest
          = (fibs_field.rest fibs_field (0,1))
          = (\r x -> (r <= (x.1,x.0 + x.1))) fibs_field (0,1)
          = (fibs_field <= ((0,1).1,(0,1).0 + (0,1).1))
          = (fibs_field <= (1,1))
fibs.rest.rest = ... = (fibs_field <= (1,2))
fibs.rest.rest.rest = ... = (fibs_field <= (2,3))
etc.

It's useful to think of <= as a constructor for data, rather than as a function or operator. Modulo partial-evaluation optimizations, nothing is bound to 'r' or 'x' until you actually ask for a named component of the resulting constructed value.

(I happen to use =< for less than or equal to... the arrows all have other applications.)
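For readers who think in ML, here is one possible OCaml approximation of the construction (my own encoding, not the actual language; the names field, covalue, and Co are mine):

(* A rough OCaml approximation of 'field <= seed': a field is a record of
   functions over (field, seed); Co (f, s) constructs the covalue, and
   nothing is computed until a component is observed. *)
type ('s, 'a) field = {
  first : ('s, 'a) field -> 's -> 'a;
  rest  : ('s, 'a) field -> 's -> ('s, 'a) covalue;
}
and ('s, 'a) covalue = Co of ('s, 'a) field * 's

let first (Co (f, s)) = f.first f s   (* observe the 'first' component *)
let rest  (Co (f, s)) = f.rest f s    (* observe the 'rest' component *)

let fibs_field = {
  first = (fun _ (a, _) -> a);
  rest  = (fun r (a, b) -> Co (r, (b, a + b)));
}
let fibs = Co (fibs_field, (0, 1))

(* first fibs = 0; first (rest fibs) = 1; first (rest (rest fibs)) = 1;
   and, as described above, each observation recomputes from the seed. *)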

fibs_field = \x -> {first=x.0, rest= fibs_field <=(x.1,(x.0+x.1))}

This formulation would have fibs_field be a function that returns a record with 'field <= seed' being a call-by-need application of the field to the seed. I initially considered an approach much like this one, but ended up favoring the field-is-a-record approach in order to support programmatic manipulation of fields and algebras.

Interesting

Did you consider separating this into orthogonal mechanisms for "evaluate this lazily" and "apply function to all fields in a record" (edit: oops, I actually mean "apply record of functions to this parameter")? For example, letting 'lazy f x y ..' be a lazy cell evaluating 'f x y ..', and letting records behave as functions ('{f=a, g=b} x' => '{f=a x, g=b x}'), it looks like you could recover your notion of fields:

fibs_field = {first=\r x -> x.0, rest= \r x -> r <=(x.1,(x.0+x.1))}
where f <= x = let r = lazy f r x in r

No, I never considered doing

No, I never considered doing that. I had rejected programmer control over laziness at an earlier stage in the development of the programming language, so when I later approached the issue of coinductive data I was already looking for a solution that wouldn't rely on programmer-specified lazy valuation.

And if you're curious about my rejection of laziness, there are several reasons. First, the language is for open distributed programming, and laziness (especially the proposed algorithmic benefits) is rather difficult to reason about in the face of distributed communications. Second, I wish to give the compiler greater control over implementation and programmers greater control over specification, so programmer control over an implementation-detail like laziness runs contrary to this principle. Programmers may provide laziness annotations (~(1+1)) only as suggestions, not commandments.

Besides, if you have full laziness or programmer control over laziness, then there is no need to make a distinction between inductive and coinductive data. The two converge.

Coinductive definitions

Well, I'm _guessing_ from your notation that you want an in-order traversal. This is expressible in the same extended CPS that Allison uses, except you can avoid the fancy knot-tying that Allison does.

And yes, implemented correctly, that style of expression has very real, very practical benefits in a lot of common cases, including inorder traversals.

Your type for colist is in the right ballpark, but you are still thinking in terms of too many side effects. Jacobs and Rutten's tutorial on (co)algebras and (co)induction has a rather nice section on coinductive definitions, which leads naturally to the notion of a carrier set. Check out Section 3, which starts at the very end of page 8. That should be a rather intuitive syntax for functional programmers; it's too bad that nobody's actually implemented that syntax yet (that I'm aware of).

More programming/solution oriented

I was looking for [ML pearls/design patterns] something like:

If the following is the ADT for a tree

tree a = leaf | branch a (tree a) (tree a)

then this gives rise to a lazy coinductive type [and a translation function]

cotree a = unit -> (coleaf | cobranch a (cotree a) (cotree a))

From which you can derive a lazy colist describing the content.
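In OCaml the translation might look like the following sketch (the step type costep has to be named, since OCaml has no anonymous sums, and to_cotree is my name for the translation function):

(* tree mirrors the ADT above; cotree defers each node behind a thunk. *)
type 'a tree = Leaf | Branch of 'a * 'a tree * 'a tree

type 'a costep = CoLeaf | CoBranch of 'a * 'a cotree * 'a cotree
and 'a cotree = unit -> 'a costep

(* The translation function: nothing is unfolded until the thunk is forced. *)
let rec to_cotree (t : 'a tree) : 'a cotree = fun () ->
  match t with
  | Leaf -> CoLeaf
  | Branch (v, l, r) -> CoBranch (v, to_cotree l, to_cotree r)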

Maybe the link with coalgebras is a bit of a stretch though.

[I thought B. Jacobs had a coalgebraic language?]

Translation to Laziness

What you're achieving is a general transform towards lazy valuations, as opposed to coalgebras (which would be better described as a set of destructors).

Still, once you are using lazy valuations the distinction between data and codata diminishes greatly. Whereas coinductive structures are infinite and inductive structures are finite, lazy structures essentially interleave the two and may go either way.

Hmm...

Well, the link to coalgebras is good, but I was wrong. Your type is a perfectly acceptable carrier set for the coalgebra.

You would like the paper on delimited continuations I posted below. The solutions contained therein are almost exactly what you asked for.

An alternative approach

I don't know if you're familiar with Huet's zipper, but in the absence of coroutines or lazy evaluation, a zipper approach can yield a similarly efficient solution to the given problem. This should not be surprising, as Oleg has pointed out that "a Zipper can be viewed as a delimited continuation reified as a data structure." Of course, a zipper is not especially coalgebraic, but simply a transformation of a regular type (in a sense, its derivative.)

Yeah

But I thought it was too difficult a solution for the simple problem at hand.

Another solution

I just found this gem. It differs from my proposed solution, but it's more directly applicable to strict languages:

On the Dynamic Extent of Delimited Continuations by Dariusz Biernacki and Olivier Danvy.

[edit: there is an extended version of this paper with Chung-Chieh Shan added as an author, available here]

[edit: sigh... there is yet another further revision of this paper available as On the Static and Dynamic Extents of Delimited Continuations, also published (with revisions?) in Software Practice and Experience]

Overthinking the problem?

I may be being stupid, but could this paper: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.1134 be what you're looking for? It's describing how to implement set union, intersection, difference, etc., but the techniques are widely applicable. And no advanced types are needed...

Familiar

I scanned the paper. I am familiar with most techniques, but find it hard to see how they would be applied to the comparison function I needed.

Maybe someone could show a solution given their techniques? The problem is: given two ordered binary trees, give an (optimal) comparison function for the lexicographic ordering of their content. For example, compare ((1,2),2) (1,(2,2)) = 0, compare (1) (1,1) = -1, and compare (4) (3) = 1.

You're definitely overthinking things

As am I.

You're not going to be able to do better than O(N) with early exiting. So I would be very inclined to do the following: create an iterator or a lazy list (depending upon the language and/or the programmer) which scans each tree in sequential order; this can be done in O(N) time with O(log N) space overhead. Once the trees are flattened, do an iterative lexicographic comparison of the two sequences/lazy lists. This approach is going to be real hard to beat, not least because it's simple.

Some example code in Ocaml:


type 'a tree = Leaf | Branch of 'a tree * 'a * 'a tree;;

module Iterator : sig
    type 'a iterator
    val make : 'a tree -> 'a iterator
    val curr : 'a iterator -> 'a option
    val next : 'a iterator -> 'a iterator
end = struct

    (* The iterator is the stack of subtrees still to be visited; the
       head is always a Branch whose value is the current element. *)
    type 'a iterator = 'a tree list;;

    let rec push_left itr = function
        | Leaf -> itr
        | Branch (l, _, _) as x -> push_left (x :: itr) l
    ;;

    let make tree = push_left [] tree;;

    let curr = function
        | [] -> None
        | Branch (_, x, _) :: _ -> Some x
        | Leaf :: _ -> assert false
    ;;

    let next = function
        | [] -> []
        | Branch (_, _, r) :: t -> push_left t r
        | Leaf :: _ -> assert false
    ;;

end;;

let compare tree1 tree2 =
    let rec loop itr1 itr2 =
        match ((Iterator.curr itr1), (Iterator.curr itr2)) with
            | None, None -> 0
            | Some _, None -> 1
            | None, Some _ -> -1
            | Some x, Some y ->
                if x < y then
                    -1
                else if x > y then
                    1
                else
                    loop (Iterator.next itr1) (Iterator.next itr2)
    in
    let itr1 = Iterator.make tree1 in
    let itr2 = Iterator.make tree2 in
    loop itr1 itr2
;;

There are some optimizations you could make to that code (not having to allocate two Options on every iteration being an obvious one), but that gives you the basic idea. Other languages, other minor variations- in Haskell I'd use a lazy list rather than an iterator. Same difference.

Same solution?

It looks almost exactly the same as the approach I took. The main difference is that the iterator/zipper pushes nodes onto an explicit data structure, whereas in my approach they are pushed into a lazy colist function. My solution is a bit shorter, I think.

(I think there is a small mistake in your solution, you're missing a case on x < y and y < x in the comparison, but that may be HTML formatting.)

Not quite.

No one is overthinking anything here. The paper contained in Brian's original response does not apply to the problem being discussed.

Also, Brian's solution is essentially the same as one of the solutions contained in Biernacki and Danvy's paper. That paper also contains Marco's (as I understand his solution), and the two programs are related through defunctionalization and refunctionalization. Marco's solution can be viewed as an eager solution that constructs the entire fringe before proceeding with the comparison, except it introduces thunks to avoid unnecessary construction. Alternatively, these thunks can be viewed as continuations.
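To illustrate that relationship (this sketch is mine, not code from the paper): the thunks of the lazy traversal always mean "enumerate these pending pieces of the fringe", so they can be represented first-order by a work list; Brian's left-spine stack is a further specialization of the same idea.

(* A defunctionalized fringe enumerator. The work list plays the role of
   the continuation; next interprets it, doing exactly what forcing the
   corresponding thunk would do. *)
type 'a tree = Leaf | Branch of 'a tree * 'a * 'a tree

type 'a work = Elem of 'a | Sub of 'a tree

let rec next : 'a work list -> ('a * 'a work list) option = function
  | [] -> None
  | Sub Leaf :: k -> next k
  | Sub (Branch (l, v, r)) :: k -> next (Sub l :: Elem v :: Sub r :: k)
  | Elem v :: k -> Some (v, k)

let start t = [ Sub t ]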

As to which solution performs the best, that's rather dependent upon the particulars of a given language implementation and CPU architecture. Continuations often have a reputation for being "slow", because most Scheme implementations don't implement call/cc efficiently. But call/cc is not inherently inefficient, and there are continuations other than those provided by call/cc.

A continuation based implementation might contain more indirect jumps, but they also might make better use of native stacks to reduce heap allocation.

[edit: I resolved the ambiguity with regard to which paper I was referring to, in addition to other more minor clarifications.]

Solution

I thought I should post my solution too:

import "system.hi"

namespace colist (

    using system
    using opt

    type colist = \a => unit -> opt (a, colist a)

    def conil: colist a = 
        [ _ -> opt.nothing ]

    def cocons: a -> colist a -> colist a =
        [ x, xx -> [ _ -> opt.just (x, xx) ] ]

    def cocat: colist a -> colist a -> colist a =
        [ xx, yy ->
            [ u ->
                v = xx u;
                if opt.nothing_test v then yy u
                else 
                    (x, xx) = opt.value v;
                    opt.just (x, cocat xx yy) ] ]

    def cocompare: ::Ord a => colist a -> colist a -> int =
        [ xx, yy ->
            v0 = xx nop;
            v1 = yy nop;
            if opt.nothing_test v0 then
                if opt.nothing_test v1 then 0 else (0 - 1)
            else if opt.nothing_test v1 then 1
            else
                (x, xx) = opt.value v0;
                (y, yy) = opt.value v1;
                if x < y then (0 - 1)
                else if x > y then 1
                else cocompare xx yy ]

)

using system
using colist

type tree = \a => [ leaf | branch (tree a) a (tree a) ]

def tree_to_colist: tree a -> colist a =
    [ leaf -> conil
    | branch l v r ->
        [ u ->
            cocat (tree_to_colist l) 
               (cocons v (tree_to_colist r)) u ] ]

def cmp: tree int -> tree int -> int =
    [ t0, t1 -> 
        ct0 = tree_to_colist t0;
        ct1 = tree_to_colist t1;
        cocompare ct0 ct1 ]

I am still wrestling with my compiler, so there may be some small errors.

I agree with most of Leon's remarks. It's just an O(n) comparison this way. Note that it totally mimics lazy evaluation by deferring the evaluation until "the button is pushed" or "an observation is made" by applying the thunk to the value "nop".

[I didn't see such a solution in the paper.]

[Another small advantage of this construct over the iterator construction is that you can model infinite structures.]

[Maybe it should have been based on the real coalgebraic type c: L t -> 1 + (t * L t), but, yeah, this works.]

Well...

I admit that the paper elides the final step of lexicographically comparing the two colists, but modulo the difference in tree definitions and a few other quibbles, how is your tree_to_colist substantially different from visit in Section 4.1.2 on page 10? (Page 14 if you go by my PDF reader...)

In the eye of the beholder?

Well, it behaves almost the same, so it is the same, I guess. Though the type I use is a function, and the other type is wrapped in a datatype.

I guess, if anything, my example, or both examples, just show that a generator can be understood to be an instance of a coalgebraic type. I like my example a bit more for clarity, but the visit example is certainly shorter.

I think this boils down to taste, if we go out looking for (counter-)evidence, I think we'll both find it.

[I missed that particular solution, I was looking for references to coalgebraic types, and was referring to the other paper. ]

[And I guess that exposing the coalgebraic nature opens a mental door to other coalgebraic uses.]

[I guess a substantial advantage of using a function over a datatype is that any evaluation is only triggered when observing.]

Evaluate by Need

I thought it over, as you can see at the post above. ;-) The gist: yes, they are almost equivalent.

But...

If you want to truly emulate lazy evaluation, which you need in this case, it is better to wrap the coalgebraic type in a function than in a datatype, since you can then more easily express call-by-need evaluation.

A slight shift in view, maybe, but I think it is important and makes a generator/coalgebraic type substantially more usable.

[Ok, sometimes they seem to do this in the article. Still think my solution is way clearer.]

Nice

Well, upon reviewing my post(s), I understand the source of confusion, as I have linked to five things so far in this discussion. I edited my post to clarify that I was referring to Biernacki and Danvy's paper.

You use datatypes too. Your colist type is isomorphic to sequence computation as defined in structure Lazy_generator in Section 4.1.2. One difference is that your tree_to_colist returns a sequence computation while visit returns a sequence. This recalls the distinction between even and odd streams in Wadler et al's paper, although maybe that distinction doesn't really apply to the traversals themselves.

Except for the fact that your use of cocat leads to inefficiency, you don't really need call-by-need evaluation if all you want to do is compare the two colists. Simple call-by-name will probably perform slightly better in this particular use case.

However, as you are already using continuations, with a little tweaking of your type, it's easy to implement a constant-time concatenation function.
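For instance, here is a sketch of one such tweak in OCaml, under my reading of the suggestion (the names visit and to_colist are mine): make the traversal take the rest of the colist as an argument, so "concatenation" amounts to choosing the continuation and costs O(1), and the whole fringe comes out in O(n).

(* The continuation-threaded fringe: visit t k enumerates t's contents,
   then continues as k, so no cocat over left spines is ever needed. *)
type 'a tree = Leaf | Branch of 'a tree * 'a * 'a tree
type 'a colist = unit -> ('a * 'a colist) option

let rec visit (t : 'a tree) (k : 'a colist) : 'a colist =
  match t with
  | Leaf -> k
  | Branch (l, v, r) -> visit l (fun () -> Some (v, visit r k))

let to_colist t = visit t (fun () -> None)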

Although I can't precisely nail down the connection between coalgebras and continuations yet, I've been thoroughly convinced from the beginning (a little over 8 years ago, in my case) that there is a very deep connection between the two.

Which is funny, as I didn't understand continuations or coalgebras very well then. While I certainly understand both much better these days, I'm not sure I really grok them. :-)

If you were coming at this

If you were coming at this from an object-oriented perspective, you'd probably think continuations had a deep connection to algebras...

Interesting

I am interested in hearing you elaborate on this statement, but I think it would be better to move it to another thread. I do believe there is a connection between continuations and open recursion; perhaps that thread would be relevant?

(As I started learning FP about 12 years ago, I kinda stopped thinking about objects, so...)

Thanks

Yeah, well. We use labels to convey ideas. I am not very fond of reusing labels between different settings, so I guess I failed myself here. ;-)

Though, it seems, again, that the connection between generators/continuations and coalgebras is as superficial, or as deep, as the connection between monads in Haskell and the categorical interpretation. Although, I think most people understand continuations as 'suspending a calculation' and coalgebras as 'possible infinite structures'.

It helps, a bit, in this case to understand it as a coalgebra since it becomes rather trivial to encode/reuse mathematical comonadic functions on comonadic data as operational constructs, as shown. [cofilter, cozip, coetc...] I've seen a lot of people referring to the functor as an operational 'step'-function. After that, operationally, I guess generator, or even recursive continuation, is a better name than coalgebraic list.

At some point, if it turns out to be a worthwhile idea, it might be worth introducing a codata type with approximately the described semantics into my language.

The recursive type from Danvy et al. and the one I use are approximately bisimilar. We both saw that. It's just a matter of whether it is more natural to use the product of sums, or the sum of products, as the base for expressing the definitions on the structures. I like the latter approach and gladly take some performance hit for the benefit of expressivity here. (I don't fully understand your comment on constant time here - yes, it can be expressed more directly, but that will only buy a constant factor, and probably can be compiled away anyway.) [Ah! I see it.]

But, I admit my interest in coalgebras goes as far as they have a concrete use. (I like math, but I don't like category theory. I'll read it, but compared to other math, to me it seems mostly a lot of air. There are some ideas I like, for the rest it is definitely not my thing, or I think they just got it plain wrong. At least, that's what I thought eight years ago, and I forgot most I knew then. ;-) )

To each his own. And for the rest, I'll gladly admit to anyone that I just don't grok anything and mostly produce air myself. ;-)

Now that you've shown me yours...

...I'll show you mine. As soon as I can fix a weird LaTeX error. :-)

[And I wouldn't fret too much that your solution has already been published, after all, look at the names on the paper!]

O(n)

I think we've arrived at a copacetic point in this discussion, but there is one last point I cannot let stand. Namely, I'm pretty confident that Brian's solution is O(n); however, I don't think your solution is, Marco.

Now, I don't know what your compiler looks like, and it's possible you are doing something unusual and possibly rather interesting to optimize cocat. However, I'm also naturally somewhat skeptical of optimizations that go beyond constant-factor performance improvements. (At least with respect to time, maybe not so much with respect to space)

However, if you translate this code directly to a compiler I'm familiar with, like GHC, Chez, SML/NJ, Moscow ML, or OCaml, I'm pretty sure your traversal will not operate in linear time on most trees, due to the naive use of list concatenation.

Could you run some empirical timing tests and get back to me?

Nope

I'm in the middle of debugging. The whole enchilada is broken at the moment, so running some tests will take time. And you are right about the order. Which puzzles me too, since it is pretty similar to what I think one would do in Haskell.

[Killed off some nonsense: I think the complexity of my solution is the same as the complexity of naively flattening a tree in Haskell. Does anyone here know that complexity?]

[The bad thing is that I ran this example through my stage0 compiler, which translates to a GRS in ML, which might give different performance than the result of the stage1 bootstrap compiler, which translates to C. But I don't have any cycles left to spare to think that over.]

God, now I am interested

Does someone feel like getting the complexity right for this case? O(n*n)?

Wrapping it up: Samefringe Problem

Found out I had posed a rather well-known problem.

Found references saying that flatten in Haskell is either O(n^2) or O(n log n).

The standard way to get good performance in Haskell is to use an accumulator to get O(n) flatten, but I think that cannot be used here.

Master Theorem

I might have it totally wrong, but I guess the recurrence relation for a balanced binary tree would be T(n) = 2 T(n/2) + n/2, the n/2 accounting for the concatenation at the root. With a = 2, b = 2, and f(n) = Θ(n), that is case (2) of the master theorem: O(n log n).

For the rest, sorry for boring you.

Re: Master Theorem

Haha, not a problem. Thanks for citing the Master Theorem, as that's a name everybody here should know!

The complexity depends on the shape of the tree. Your analysis should be correct in the case of balanced trees; however, your algorithm can be O(n) if the tree tilts enough to the right, or O(n^2) if it tilts enough to the left. So it all depends.

Yeah, saw that

I agree: O(n^2) worst case if all values are always in the left branch, which will not happen often. In my case, where I use trivially size-balanced maps, I am not sure how it will work out.