DPROG
started 4/8/2003; 4:04:15 AM - last post 4/20/2003; 10:39:54 PM
andrew cooke - DPROG
4/8/2003; 4:04:15 AM (reads: 4112, responses: 20)
DPROG is a domain-specific language for specifying dynamic programming algorithms; given a recursive definition of the problem, the compiler generates code for solving the problem using dynamic programming.
via sweetcode
Posted to DSL by andrew cooke on 4/8/03; 4:08:03 AM
Ehud Lamm - Re: DPROG
4/8/2003; 5:39:08 AM (reads: 2986, responses: 0)
But basically you need to state how the dynamic algorithm works, right? It would be much more exciting if this was inferred...
Luke Gorrie - Re: DPROG
4/8/2003; 8:14:47 AM (reads: 2959, responses: 1)
Can someone describe how it differs from the Lispy "memoize" approach to dynamic programming?
I think that this short paper by Peter Norvig is a very nice demonstration of dynamic programming via memoization in Lisp: http://acl.ldc.upenn.edu/J/J91/J91-1004.pdf
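For concreteness, the core of that approach is only a few lines of Lisp (a generic sketch of the idea, not Norvig's code from the paper):

(defun memoize (fn)
  ;; Wrap FN so that results are cached in a hash table keyed on the
  ;; argument list; repeated calls with EQUAL arguments hit the cache.
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (multiple-value-bind (value found) (gethash args cache)
        (if found
            value
            (setf (gethash args cache) (apply fn args)))))))

;; Replacing the global definition makes the recursive calls go
;; through the cache too:
(defun fib (n)
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
(setf (symbol-function 'fib) (memoize #'fib))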
Ehud Lamm - Re: DPROG
4/8/2003; 8:31:36 AM (reads: 3025, responses: 0)
And recall the cool work on automatic memoization that was mentioned a couple of times on LtU (by Bob Harper?).
Darius Bacon - Re: DPROG
4/8/2003; 1:06:08 PM (reads: 2903, responses: 0)
I haven't looked at the implementation, but I'd expect it evaluates things bottom-up, instead of top-down with "have I solved this subproblem yet?" flags (the memoizing approach). This is supported by the section on well-founded recurrences.
Here's a really basic subset I just hacked up in (not-quite-portable) Common Lisp:
(defmacro define-recurrence (name &body equations)
  (let ((LHSs (mapcar #'first equations))
        (RHSs (mapcar #'third equations)))
    (let ((n (car (last LHSs))))
      (expand-recurrence name n LHSs RHSs))))

(defun expand-recurrence (name n LHSs RHSs)
  (labels ((expand-equation (lhs rhs)
             `(,(expand-lhs lhs) ,(expand-rhs rhs)))
           (expand-lhs (lhs)
             `(= ,n ,lhs))
           (expand-rhs (rhs)
             (cond ((atom rhs) rhs)
                   ((eq (first rhs) name)
                    `(aref ,@(mapcar #'expand-rhs rhs)))
                   (t (mapcar #'expand-rhs rhs)))))
    (let ((limit (gensym)))   ; fresh name, so the loop variable can't capture it
      `(defun ,name (,n)
         (let ((,limit (+ ,n 1)))
           (let ((,name (make-array (list ,limit))))
             ;; DOTIMES rebinds ,n here; the count was hoisted into ,limit
             ;; above so it is evaluated before the shadowing starts.
             (dotimes (,n ,limit)
               (setf (aref ,name ,n)
                     (cond ,@(mapcar #'expand-equation LHSs RHSs))))
             (aref ,name ,n)))))))

(define-recurrence fib
  (0 -> 0)
  (1 -> 1)
  (n -> (+ (fib (- n 1))
           (fib (- n 2)))))
Darius Bacon - Re: DPROG
4/8/2003; 2:42:55 PM (reads: 2899, responses: 0)
D'oh! In the shower this improvement occurred to me:
(defmacro defun-recurrence (name (n) &body body)
  (let ((limit (gensym)))   ; evaluate the bound once, before ,n is shadowed
    `(defun ,name (,n)
       (let* ((,limit ,n)
              (,name (make-array (+ ,limit 1))))
         (loop for ,n from 0 to ,limit
               do (setf (aref ,name ,n)
                        (flet ((,name (,n) (aref ,name ,n)))
                          ,@body)))
         (aref ,name ,limit)))))

(defun-recurrence fib (n)
  (if (< n 2)
      n
      (+ (fib (- n 1))
         (fib (- n 2)))))
So if we generalized that to multidimensional arrays, would DPROG be doing anything more? I'm not sure.
Luke Gorrie - Re: DPROG
4/8/2003; 3:20:02 PM (reads: 2876, responses: 0)
Bloody hell, nice program!
Here I thought I'd made my peace with having to use C++/Java all day, and then you go and pull a lisp stunt like that on me. ugh :(
Darius Bacon - Re: DPROG
4/8/2003; 8:29:57 PM (reads: 2859, responses: 1)
Thanks, Luke. :) I wish I could use Lisp more in paid work, too...
Ehud Lamm - Re: DPROG
4/8/2003; 11:40:30 PM (reads: 2895, responses: 0)
The LtU stipend program is going to be announced shortly...
Luke Gorrie - Re: DPROG
4/9/2003; 8:35:54 AM (reads: 2770, responses: 0)
Next question: why is bottom-up better than top-down? Is it just to avoid memoization's "have we already computed this" table entry lookup? What about algorithms where to compute f(N) you only ever need to use a subset of values from f(0)..f(N-1)?
Darius Bacon - Re: DPROG
4/10/2003; 7:54:48 PM (reads: 2640, responses: 0)
I think that's the main reason. You also avoid the stack space from
the recursion, and have a different form of the program that may be
open to other optimizations.
At http://www.cs.sunysb.edu/~liu/ there's a paper "Dynamic programming
via static incrementalization" about analyzing/transforming code in
ordinary languages in an intelligent way -- I only skimmed the first
few pages but it looks like it's closer to Ehud's idea of inferring
how the DP algorithm should work...
And at http://accesscom.com/~darius/hacks/screenfuls/screen3.html is a
fancier version of the Lisp above. It can handle all the examples
from the DPROG tutorial -- e.g.
M[n,k] = select{ A[0] when n=1,
                 sum{ A[i] where 0 <= i < n } when k=1,
                 min{ max{ M[i,k-1], sum{ A[j] where i <= j < n } }
                      where 1 <= i <= n } }
where 1 <= n <= N and 1 <= k <= K
becomes
(defun-recurrence M (n k) (1 1)
  (cond ((= n 1) (A 0))
        ((= k 1) (loop for i from 0 below n
                       sum (A i)))
        (t (loop for i from 1 to n
                 minimize (max (M i (- k 1))
                               (loop for j from i below n
                                     sum (A j)))))))
It's so much easier to extend the right language than create a new
one! Of course, if you start getting into things like fancy analyses
to detect which subset of the values you'll need, it may be just as
much work in the end, but at least you got something usable quickly.
If I wanted to use this for an algorithm like FIB that only needs the
last few values, not a table of all values, I'd modify the TABULATE
macro in the code linked above. (Basically turn (AREF a i) into (AREF
a (MOD i 2)).) You could make an open implementation with hooks for
different changes, like hand-rolled AOP -- I won't bother because then
the code would no longer fit on one screen. :-)
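Spelled out by hand for FIB, the result would look something like this (a sketch of the generated shape, not the actual TABULATE change):

;; Bottom-up FIB keeping only the last two cells: every (AREF a i)
;; becomes (AREF a (MOD i 2)), so the table shrinks to two elements.
(defun fib (n)
  (let ((a (make-array 2 :initial-contents '(0 1))))
    (loop for i from 2 to n
          do (setf (aref a (mod i 2))
                   (+ (aref a (mod (- i 1) 2))
                      (aref a (mod (- i 2) 2)))))
    (aref a (mod n 2))))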
Finally, it's possible to do this with a C++ library after all -- code
using it would look like
inline int compute_fib(Recurrer<int, int>& fib, int n) {
    return n < 2 ? n : fib(n-1) + fib(n-2);
}

int fib(int n) {
    return recur(compute_fib, n);
}
Darius Bacon - Re: DPROG
4/10/2003; 8:08:34 PM (reads: 2626, responses: 0)
Oh, and that stipend would sure come in handy right now. heh.
Thomas Mailund - Re: DPROG
4/11/2003; 3:44:21 AM (reads: 2655, responses: 0)
Hi guys. I'm new here, so excuse me if I screw up and reply in some other thread than intended, or anything like that.
I see that you are discussing DPROG and I would like to respond to a few comments on it. First of all, the memoization versus dynamic programming choice: The main reason for using dynamic programming and not memoization for my purposes is that I need to extend the generated dynamic programming algorithms with garbage collection, so to speak, of the tables filled in during the algorithm. The data sets I work with are too large to allocate N*N or N^3 matrices, so I only want to allocate and use the columns and rows that I strictly need for the algorithm (and then use Hirschberg's algorithm to re-generate paths through the matrices when I need them, or just the final value when that is all I need). With dynamic programming this is simple to achieve; with memoization I wouldn't know how to approach it.
This leads me to the second comment: Why write a new compiler instead of using Lisp macros or C++ templates? The main reason for writing DPROG is to get the compiler to automatically infer the number of columns and/or rows needed at any given time to complete the algorithm, and to provide hooks to Hirschberg's algorithm to simplify using it. Calculating dependencies between matrices and matrix cells is what I am currently working on. At least, it is what I was working on at the start of this week, and it is what I will be working on when I return from the conference I am currently attending. This dependency calculation is an important part of the compiler, and will be added before moving from alpha to beta.
Using a library in another programming language can be just as easy as expressing the recurrence in DPROG, at least for simpler recurrences, but using the DPROG compiler you can (hopefully) check that the recurrence is well-founded and infer which elements in the algorithm you need to remember and which you can delete as the algorithm progresses.
Yours,
/mailund
The data sets I work with are too large to allocate N*N or N^3 matrices, so I only want to allocate and use the columns and rows that I strictly need for the algorithm
Sparse arrays are commonly used for this purpose.
With dynamic programming this is simple to achieve, with memoization I wouldn't know how to approach it.
You could use sparse arrays with a solution like the one in Darius Bacon's second post in this thread.
BTW, I'm not arguing against the use of a DSL. However, it seems to me that an embedded DSL could be more useful than a standalone one in a case like this. If this were embedded in Scheme or Lisp, you could easily analyze recurrence definitions. And it would have saved you a lot of work! ;)
P.S. I notice the Norvig paper referenced above uses hash tables, which address the sparseness issue.
Thomas Mailund - Re: DPROG
4/12/2003; 2:56:07 AM (reads: 2545, responses: 1)
The data sets I work with are too large to allocate N*N or N^3 matrices, so I only want to allocate and use the columns and rows that I strictly need for the algorithm
Sparse arrays are commonly used for this purpose.
I haven't had time to read the Norvig paper, so what I am about to say might be way off...
As I understand the suggestion of using sparse arrays, you are saying that I can reduce memory usage by storing only the entries in the arrays that I need. That is true, but what if I really need all entries, just not at the same time in the algorithm? Typically, I need to fill in a matrix one row or one column at a time, and when filling a row/column I only refer to the current row/column or the previous row/column. All rows/columns further back are no longer needed, so I want to re-use the memory in those rows/columns.
If, for instance, I fill in the matrix one row at a time, only referring back a single row, I can address into the matrix as M[i,j] == M + N*(i mod 2) + j (where N is the row length).
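In code, that addressing scheme looks like this (sketched in Lisp to match the earlier examples -- my actual setting is C++):

;; A "virtual" matrix with many rows, stored in a flat 2*N buffer and
;; addressed modulo 2.  Valid whenever row i only reads rows i and i-1.
(defun make-two-row-matrix (n)
  (make-array (* 2 n) :initial-element 0))

(defun mref (buf n i j)
  ;; read virtual cell M[i,j]
  (aref buf (+ (* n (mod i 2)) j)))

(defun (setf mref) (value buf n i j)
  ;; write virtual cell M[i,j], silently recycling the row from i-2
  (setf (aref buf (+ (* n (mod i 2)) j)) value))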
The problem with memoization (as I understand the sparse arrays to be) is that it saves me from using memory for cells that will never be used, but it does not allow me to recognize when a cell is no longer of use and to re-claim its memory for another cell.
In the applications I am interested in, I typically need all cells, just not at the same time, so memoization is not the right choice.
BTW, I'm not arguing against the use of a DSL. However, it seems to me that an embedded DSL could be more useful than a standalone one in a case like this. If this were embedded in Scheme or Lisp, you could easily analyze recurrence definitions. And it would have saved you a lot of work! ;)
I absolutely agree that embedding the DSL in an existing language is better than what I do with DPROG; unfortunately, I need to embed it in C++, and I couldn't think of a way to do this easily -- especially if I wanted the analysis of the program to automatically determine the order to fill in cells and to reduce memory usage by reclaiming the memory used to store cells.
You are probably right about it being easier in Lisp. My Lisp experience is limited to hacking a bit of elisp and playing around with guile a few years ago, though, so I wouldn't know how to approach it.
But as I see it, the only thing I would really save in effort by programming this in Lisp is the parsing and code generation (since the input program can be read immediately as lists, and since the output is a simple matter of expanding macros -- to a certain extent, anyway), but the analysis of the program would still need to be done. Maybe it could be done more elegantly in Lisp than in C++, but for me it would have the additional overhead of having to familiarise myself with a new language.
/mailund
Darius Bacon - Re: DPROG
4/12/2003; 11:11:01 PM (reads: 2508, responses: 0)
Welcome to LtU, Thomas, and thanks for creating DPROG -- I had fun playing with this, and it ought to be even cooler with the dependency analyses when you've got them.
Just to continue the kibitzing: if I were extending my version with the array representation optimizations you mention, I think I'd use an open implementation with optional representation hints, as I mentioned above for the FIB function, and worry about analyses to produce the hints automatically (and statically ensure correctness) as the very last step, one that might not turn out to be needed in practice. But I don't intend to do the work, and don't know anything about the algorithms you're concerned with, so this opinion is only a wild guess.
Luke Gorrie - Re: DPROG
4/14/2003; 6:53:02 AM (reads: 2457, responses: 0)
Quick note: Thomas, you're right about the memoization in Norvig's paper not pruning out values that aren't needed anymore.
It would be very interesting if you could post a simple algorithm that is representative of the ones that require the full power of DPROG -- if any simple one exists.
It could be very interesting to extend Darius's system to support such an algorithm and do the pruning of old values. I've often heard of C++ being favoured over Lisp in these applications for performance reasons, and I'm very curious to measure the actual performance difference on a realistic example. I don't know off hand whether to put my money on a factor of two, ten, or a hundred. Enquiring minds want to know :-)
Thomas Mailund - Re: DPROG
4/14/2003; 8:39:48 AM (reads: 2444, responses: 0)
It would be very interesting if you could post a simple algorithm that is representative of the ones that require the full power of DPROG -- if any simple one exists.
The affine gap cost alignment example in the tutorial is actually such an example. (I'm looking at somewhat more complicated algorithms, but the main difference is more tables and more bookkeeping -- the essential stuff is found in the example.)
There are three matrices that refer to each other, but in such a way that it is never necessary to store more than two rows of each to calculate all the cells, thus reducing the memory usage from O(N^2) to O(N), where N is the length of the sequences (which can easily be in the tens or hundreds of thousands). (In this example you can get away with a little less memory than the three times two rows, but the usage will still be linear, so I don't bother with it.)
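To show the two-row discipline without the affine-gap bookkeeping, here is the same memory pattern on plain edit distance (a sketch; the tutorial's example has three such interlinked tables):

(defun edit-distance (a b)
  ;; Row i of the DP matrix only reads row i-1, so two rows of length
  ;; N+1 suffice: O(N) memory instead of O(N^2).
  (let* ((n (length b))
         (prev (make-array (+ n 1)))
         (curr (make-array (+ n 1))))
    (dotimes (j (+ n 1))
      (setf (aref prev j) j))             ; aligning against the empty prefix
    (loop for i from 1 to (length a)
          do (setf (aref curr 0) i)
             (loop for j from 1 to n
                   do (setf (aref curr j)
                            (min (+ 1 (aref prev j))          ; deletion
                                 (+ 1 (aref curr (- j 1)))    ; insertion
                                 (+ (aref prev (- j 1))       ; match/mismatch
                                    (if (char= (char a (- i 1))
                                               (char b (- j 1)))
                                        0 1)))))
             (rotatef prev curr))         ; recycle the old row's storage
    (aref prev n)))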
In the general case you will have more matrices and you will need to store k rows for each matrix (where k may vary with the matrix as the rows in one matrix depend on the needs of other matrices).
It is not really that hard to calculate the number of rows I need to store for each matrix; it is just cumbersome to have to do it every time I change the algorithm. My hope is that DPROG will make it easier to prototype algorithms, and that the performance of the generated code will not be too far from hand-optimized code. (At this point I have no idea about the performance of the generated code -- I don't want to consider performance before I have the functionality.)
As for the relative performance between the generated C++ and any Lisp implementation, I have no idea. I wouldn't be surprised if they were in the same ballpark if the Lisp code is compiled. But, as I said, I have no idea.
/mailund
Kragen Sitaker - Re: DPROG
4/15/2003; 12:19:25 PM (reads: 2504, responses: 0)
If you can prove that your algorithm only benefits from a certain maximum amount of storage (or number of entries), you can modify the memoizer to throw away its cache of memoized values and start with a new, empty cache when that amount of storage is exceeded. This results in very simple code -- hardly more than the memoizer implementations already posted -- and often speeds your program up enough that you don't need more sophisticated garbage collection algorithms, like LRU, or algorithmic analysis to determine what your program will need in the future.
Python's re module uses this system, for example.
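A sketch of that change, in Lisp to match the memoizers above:

(defun memoize-flushing (fn limit)
  ;; Like a plain memoizer, but the whole cache is discarded once it
  ;; holds more than LIMIT entries, bounding memory use.
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (when (> (hash-table-count cache) limit)
        (clrhash cache))                  ; start over with an empty cache
      (multiple-value-bind (value found) (gethash args cache)
        (if found
            value
            (setf (gethash args cache) (apply fn args)))))))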
Thomas Mailund - Re: DPROG
4/20/2003; 10:39:54 PM (reads: 2317, responses: 0)
If you can prove that your algorithm only benefits from a certain maximum amount of storage (or number of entries), you can modify the memoizer to throw away its cache of memoized values and start with a new, empty cache when that amount of storage is exceeded.
But I need to make sure that I never, ever, have a cache miss. If I have a cache miss I need to recalculate the missing value, and potentially this could change the complexity of the algorithm from, say, quadratic or cubic to exponential.
It is not enough to know a bound on the number of cells I need to store; I also need to know which cells to store. And if I know that, there is no need for the memoization approach; I can use dynamic programming.
This results in very simple code -- hardly more than the memoizer implementations already posted -- and often speeds your program up enough that you don't need more sophisticated garbage collection algorithms, like LRU or algorithmic analysis to determine what your program will need in the future.
I don't really need any fancy GC algorithm since I already know which values will be used and which will not from the program analysis. So I don't really use garbage collection, I just index into tables modulo some known bound.
/mailund