any functional language without GC?

If you think about it, memory allocation/de-allocation is really just a side effect, so I wonder if there is a functional language that makes it explicit. Any advice is appreciated :-)

monadic constraint programming?

I have been thinking about constraint programming in terms of monads, and I have just about come to the conclusion that constraint models can be completely captured through monadic technology. What follows is an analysis conveyed through a typical example in linear programming. The motivation for the post is to see if anyone else has come up with this analysis, or if it is already well known and (hopefully) reported in the literature; or, of course, to spot problems with the formulation.

Best wishes,

--greg

It seems that linear programming (LP) and other kinds of problems factor in an intriguing way. A typical LP problem will be of the form

    Objective:  maximize \Sum a_i * v_i
    Subject to: \Sum c_{i,j} * v_i <= b_j, for j <- [0..M]

(Note, I'm being somewhat free with syntax in order to get to the essence of the ideas. Apologies to Haskell folks.) Writing this information symbolically, we have

    ([ (a[i],v[i]) | i <- [0..N], [ ((sumS [ (c[i,j],v[i]) | true ], null), b[j]) | j <- [0..M] ] ], 0, sumT)

where

    sumT :: Expr -> Expr -> |R -> |R

is a traced valuation function for reducing expressions in the decision variables to real numbers, and

    sumS :: Expr -> Expr -> Expr -> Expr

is also a traced valuation function, for reducing expressions to additional constraints. Note well: the monadic forms inside the qualifiers of the outer comprehension are intended to *generate* additional qualifiers.

The interesting notion is that every constraint problem like this decomposes into a monad together with a *traced* valuation function, CP = (Monad, Val). In the example above we have

    Monad = [ (a[i],v[i]) | i <- [0..N], [ ((sumS [ (c[i,j],v[i]) | true ], null), b[j]) | j <- [0..M] ] ]

which generates the list of pairs of array accesses that we want to evaluate -- via sumT -- to the appropriate products, and Val = sumT. A constraint solver eats a pair consisting of the monadic description together with the sumT function; the effect of running the solver is to drive the monadic generation using the valuation function. This is the moral content of Simplex, for example.

An important point to note is that the monadic description never has to be evaluated outside of the context of a particular interpretation of some monad. It's data that has polymorphic interpretations across many potentially applicable monads. Said otherwise, it's a program with many possible (re)uses.

The reason to use a traced valuation function (essentially keeping an accumulator) is that this enables composition of models. Clearly, the monadic part has a compositional story via the usual composition of monads. The traced valuation function allows composition of the valuation functions. Thus, if we have (M1,V1) and (M2,V2), we can define

    (M1,V1) o (M2,V2) = (M1 o M2, V1 o V2)

where

    (V1 o V2)(x, acc) := V2(x, V1(x, acc))

and M1 o M2 is the appropriate monad composition.

Note well that these design choices make solving a model essentially like running some form of fold. Again returning to the example, we actually have (M, 0, sumT), where sumT may be considered of type X -> A -> A. Then an expression of the form

    (solve sumT 0 M)

is in exact analogy with

    (foldl fn seed list)

where

* sumT takes the role of fn -- which is why it's traced,
* 0 takes the role of seed, and
* M takes the role of list.

(A small sketch of this reading appears below.)
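To make the fold analogy concrete, here is a minimal Haskell sketch of the (model, traced valuation) reading. All of the names in it (Model, Val, evalModel, and the concrete sumT and assignment) are illustrative choices rather than anything fixed by the post, and the "monadic part" is flattened to a plain list of (coefficient, variable-index) pairs.

    module CPSketch where

    -- An assignment of real values to the decision variables v_0 .. v_N.
    type Assignment = Int -> Double

    -- The "monadic" part of the problem: the data generated by the
    -- comprehension, here simply (coefficient, variable-index) pairs.
    type Model = [(Double, Int)]

    -- A traced valuation in the post's sense: it consumes one element
    -- of the model together with an accumulator (sumT :: X -> A -> A).
    type Val a = (Double, Int) -> a -> a

    -- Driving the model through the valuation is just a fold,
    -- in analogy with (foldl fn seed list).
    evalModel :: Val a -> a -> Model -> a
    evalModel val seed = foldr val seed

    -- The concrete valuation for the objective Sum a_i * v_i, evaluated
    -- under a fixed candidate assignment of the decision variables.
    sumT :: Assignment -> Val Double
    sumT v (a, i) acc = acc + a * v i

    -- Example: the objective 3*v_0 + 2*v_1 at the point v_0 = 1, v_1 = 4.
    example :: Double
    example = evalModel (sumT assignment) 0 [(3, 0), (2, 1)]
      where
        assignment i = [1, 4] !! i   -- v_0 = 1, v_1 = 4, so example = 11

Note that this only evaluates a model against a fixed assignment; an actual solver (Simplex, say) would additionally search over assignments while driving the same generation-plus-valuation machinery.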
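The composition law for models can be sketched in the same vein, continuing the definitions above. One assumption here: "the appropriate monad composition" M1 o M2 is approximated by plain list concatenation, which only covers the flattened list models used in this sketch.

    -- Composing two (model, valuation) pairs as in the post:
    -- the models are combined (here: concatenated, standing in for
    -- "the appropriate monad composition") and the valuations are
    -- chained so that (V1 o V2)(x, acc) = V2(x, V1(x, acc)).
    composeCP :: (Model, Val a) -> (Model, Val a) -> (Model, Val a)
    composeCP (m1, v1) (m2, v2) = (m1 ++ m2, \x acc -> v2 x (v1 x acc))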
Adopting this approach seems to unify all of the different sorts of constraint programming, from LP to IP to FD to SAT to ..., but also to line up with a generic query notion. Specifically, the monadic form [ E | C0, ..., CN ] seems to be just another way of saying select { E } where { C0, ..., CN }. (A small example of this reading follows the post.)

By L G Meredith at 2006-04-04 20:38 | LtU Forum
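As a small illustration of the comprehension-as-query reading (again with invented names and data), a list comprehension already behaves like select/where: the expression to the left of the bar is what gets selected, and the qualifiers play the role of the where-clause.

    -- [ e | qualifiers ] read as: select { e } where { qualifiers }.
    cheapTerms :: [(Double, Int)]
    cheapTerms =
      [ (c, i)                               -- "select" this expression
      | (c, i) <- zip [2.0, 5.0, 1.5] [0 ..] -- generator over the data
      , c <= 3.0                             -- "where" this qualifier holds
      ]
    -- cheapTerms == [(2.0, 0), (1.5, 2)]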