Global type inference for an OO trait-based language [early idea]

In trying to bring my YinYang work to a textual programming language, I'm beginning to design a new type system based on mixin-like traits (as in YinYang, there are no classes) with global type inference (also, to support the inference described in the aforementioned paper). As we know, global type inference via Hindley-Milner is problematic in OO languages because of subtyping, so a language like Scala is limited to local type inference.

I'm developing my system based on adding the concept of an "at most" constraint as the opposite of the typical "at least" constraint. Rather than having type variables that are bound, each variable's "at most" constraint starts as an open type that is gradually narrowed via unification. Consider some Scala/C#-like code:

trait Dictionary[K, E] {
  def set(key : K, elem : E) = {...}
  def get(key : K) : E = {...}
}

def foo(s : string) = {...}

def bar(dict) = {
  dict.set(key = 42, elem = "hello");
  foo(s = dict.get(0));
}

The bar method lists dict as an argument without specifying a type, so a type parameter is implicitly created for it (we could have written def bar[T](dict : T)). The second-last statement (calling "set") propagates constraints on dict: (a) dict is at least a Dictionary (as in YinYang, methods are selected directly rather than by name), (b) K is at most int, and (c) E is at most string. The last statement (calling "foo") propagates the constraint that E is at least string (so E is exactly string, since it's both at least and at most a string). So for any def "p" with an argument "a" that is called with expression "e" (i.e., "p(a = e)") we have:

typeof(e) <: typeof(a)
typeof(a) :> typeof(e) 

Where <: is "at least" (constraints combined by union) and :> is "at most" (constraints combined by intersection). A type error occurs on a type parameter if its "at least" set is not a subset of its "at most" set. An actual type is just a set of traits, where a supertype's set is a subset of its subtypes' (so if C : B and B : A, then typeof(C) is {A, B, C}). Type variables are never closed until they go out of scope (at which point we can just take "at least" to be the variable's type), and so we have to limit the type system in some way to allow modular type checking, which I haven't really explored yet.
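To make the bookkeeping concrete, here is a minimal Python sketch of the scheme as described above; the class and method names are hypothetical (mine, not from YinYang or any implementation). Types are trait sets, "at least" grows by union, "at most" narrows by intersection, and a variable closes to its "at least" bound:

```python
class TypeVar:
    """A type is a frozenset of trait names; if C : B and B : A then
    typeof(C) = {A, B, C}. (Hypothetical sketch, not a real checker.)"""

    def __init__(self):
        self.at_least = frozenset()  # lower bound: combined by union
        self.at_most = None          # upper bound: starts open, then
                                     # narrowed by intersection

    def require_at_least(self, traits):
        self.at_least |= traits

    def require_at_most(self, traits):
        self.at_most = traits if self.at_most is None \
            else self.at_most & traits

    def consistent(self):
        # Type error when "at least" is not a subset of "at most".
        return self.at_most is None or self.at_least <= self.at_most

    def close(self):
        # When the variable goes out of scope, take "at least" as its type.
        return self.at_least


# Replaying the constraints on E from the bar example:
STRING = frozenset({"string"})
E = TypeVar()
E.require_at_most(STRING)   # dict.set(elem = "hello"): E is at most string
E.require_at_least(STRING)  # foo(s = dict.get(0)): E is at least string
assert E.consistent() and E.close() == STRING
```

Note how the two call sites push bounds from opposite directions, and E only gets pinned to string because both bounds meet.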

YinYang-style inference comes into play when we start treating traits like type variables that can be constrained. Also, exclusive extension relationships (which prevent an int from being mixed in with a Button, for example) continue to temper the flexibility of compositions, so the system isn't completely open-ended.
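One way to model exclusive extension, purely as an illustrative guess at the mechanism (the table and function are hypothetical, not from YinYang): traits that exclusively extend the same parent can never appear together in one composition.

```python
# Hypothetical model of exclusive extension: traits that *exclusively*
# extend the same parent cannot be mixed into one composition.
EXCLUSIVE_EXTENDS = {"int": "Any", "string": "Any", "Button": "Any"}

def composable(traits):
    claimed = {}  # parent -> the one trait exclusively extending it
    for t in sorted(traits):
        parent = EXCLUSIVE_EXTENDS.get(t)
        if parent is not None:
            if parent in claimed:
                return False  # e.g. an int mixed in with a Button
            claimed[parent] = t
    return True

assert not composable({"int", "Button"})    # rejected, as in the example
assert composable({"Button", "Clickable"})  # Clickable is non-exclusive here
```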

So I'm trying to figure out why I think this is workable for global type inference when Scala cannot support it with similar mechanisms. Any ideas? Perhaps it's because I'm not relying on name binding (names are resolved to symbols by the IDE)?