Notes on Introduction To Algorithms

Peteris Krumins has been posting his notes on MIT’s Introduction to Algorithms. The notes are valuable for anyone interested in working their way through the CLRS text and the MIT OpenCourseWare videos.

I just finished watching the last lecture of MIT’s "Introduction to Algorithms" course. Having a great passion for all aspects of computing, I decided to share everything I learned...

Although the topic is not directly tied to programming languages, every PL eventually has to be able to express algorithms. Aside from Knuth, CLRS is probably the closest approximation to a comprehensive approach to algorithms. The text itself is language agnostic - the authors use their own brand of pseudo-code to describe the algorithms. This has the advantage of letting the reader focus on the algorithms at a higher level, rather than getting bogged down in the specifics of any PL. The downside, at least in my estimation, is that the authors don't make it particularly easy to implement the algorithms in any specific PL. The pseudo-code conflates common data structures (such as arrays) with properties/attributes that can be tagged onto those structures (e.g., the length of an array A is written as the attribute length[A]). Some of the algorithms also refer to variables that are outside the scope of the function. And, like Knuth, most of the algorithms are steeped in state, making it hard to implement them with functional programming approaches.

That said, the video lectures and the accompanying notes above are good resources for anyone who wants to self-study CLRS. Here are the notes thus far:


"Steeped in state"

I'm not really familiar with purely functional data structures, but I wouldn't consider being "steeped in state" a property of the presentation, like referring to variables that are outside the scope of the function. For most algorithms, it seems that factoring the state out would essentially change the algorithm, unless they did a literal translation to monadic state - in which case, what's the point?

State all the way down

Starting from the beginning of CLRS, we have sorts of various kinds. The pseudo-code expresses the sorts with loops and arrays. So when the text says "here's the general expression of a bubble sort," what is really shown is an imperative algorithm that works on mutable cells. There is nothing wrong with this technique, as the algorithms are highly optimized. But there are other data structures that can be sorted, and the algorithm that works great for arrays isn't necessarily good when applied to lists. Expressing the algorithm in recursive form can be informative, and there is not really a need to resort to state in all cases (hence, monads need not be applicable). Sorts can be expressed in both functional and imperative languages without the need for state.
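To illustrate that last point, here is a purely declarative insertion sort on lists, written in Oz (my own sketch, not taken from CLRS or CTM), that uses no mutable cells or assignment at all:
% Insertion sort on lists, no state
fun {Insert X Ys}
   case Ys
   of nil then [X]
   [] Y|Yr then
      if X =< Y then X|Ys
      else Y|{Insert X Yr}
      end
   end
end

fun {InsertionSort Xs}
   case Xs
   of nil then nil
   [] X|Xr then {Insert X {InsertionSort Xr}}
   end
end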

The pseudo-code language that is used in CLRS is strictly an imperative language. Fortunately, the analysis techniques extend beyond imperative algorithms and can be applied to algorithms in the more general sense. Because the pseudo-code is biased toward imperative languages, I would say that their language does not really capture the fundamental nature of the underlying algorithms. So the goal of avoiding any particular language is somewhat negated because it assumes a PL of a particular type.

What I'd like to find is an algorithms text that expresses a fairly wide swath of algorithms in declarative, functional, imperative, and other styles. Texts like CTM and SICP come closer, but I don't know that they can be described as expositions on algorithms.

Fundamental Nature?

Because the pseudo-code is biased toward imperative languages, I would say that their language does not really capture the fundamental nature of the underlying algorithms.

Can an algorithm be separated from its computational model? Offhand I would say no, in which case it's not clear to me that algorithms even have a "fundamental nature" in the sense your quote suggests.

MergeSort

Since we are dealing with algorithms, I figure there is no better way than to pick an example - let's run with MergeSort. Here is MergeSort as defined in the pseudo-code of CLRS:
% MergeSort in CLRS
Merge(A, p, q, r)
   n1 ← q - p + 1
   n2 ← r - q
   create arrays L[1..n1 + 1] and R[1..n2 + 1]
   for i ← 1 to n1
      do L[i] ← A[p + i - 1]
   for j ← 1 to n2
      do R[j] ← A[q + j]
   L[n1 + 1] ← ∞
   R[n2 + 1] ← ∞
   i ← 1
   j ← 1
   for k ← p to r
      do if L[i] ≤ R[j]
         then A[k] ← L[i]
              i ← i + 1
         else A[k] ← R[j]
              j ← j + 1

MergeSort(A, p, r)
   if p < r
      then q ← ⌊(p + r)/2⌋
           MergeSort(A, p, q)
           MergeSort(A, q + 1, r)
           Merge(A, p, q, r)
So, I took that and implemented it in fairly literal form in Oz:
% MergeSort in CLRS ported to Oz
Infinity = 999999999   % finite sentinel standing in for CLRS's ∞

proc {Merge A P Q R}
   N1 = Q - P + 1
   N2 = R - Q
   Lx = {NewArray 1 N1+1 _}
   Rx = {NewArray 1 N2+1 _}
in
   for I in 1..N1 do
      Lx.I := A.(P+I-1)
   end
   for J in 1..N2 do
      Rx.J := A.(Q+J)
   end
   Lx.(N1+1) := Infinity
   Rx.(N2+1) := Infinity
   local
      I = {NewCell 1}
      J = {NewCell 1}
   in
      for K in P..R do
         if Lx.@I =< Rx.@J then
            A.K := Lx.@I
            I := @I + 1
         else
            A.K := Rx.@J
            J := @J + 1
         end
      end
   end
end

proc {MergeSort A P R}
   if P < R then
      local
         Q = (P + R) div 2
      in
         {MergeSort A P Q}
         {MergeSort A Q+1 R}
         {Merge A P Q R}
      end
   end
end
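Assuming the Mozart interactive environment, a quick usage sketch for this in-place version (Tuple.toArray and Array.toRecord are standard library calls; Array.toRecord is only there so Browse can display the array's contents):
% Usage sketch for the array-based MergeSort
declare
A = {Tuple.toArray unit(5 2 4 6 1 3)}
{MergeSort A 1 6}
{Browse {Array.toRecord sorted A}}   % displays sorted(1 2 3 4 5 6)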
Lifting code from CTM, I see a merge sort for lists that does not require state:
% MergeSort in CTM
fun {Merge Xs Ys}
   case Xs # Ys
   of nil # Ys then Ys
   [] Xs # nil then Xs
   [] (X|Xr) # (Y|Yr) then
      if X<Y then X|{Merge Xr Ys}
      else Y|{Merge Xs Yr}
      end
   end
end

proc {Split Xs ?Ys ?Zs}
   case Xs
   of nil then Ys=nil Zs=nil
   [] [X] then Ys=[X] Zs=nil
   [] X1|X2|Xr then Yr Zr in
      Ys=X1|Yr
      Zs=X2|Zr
      {Split Xr Yr Zr}
   end
end

fun {MergeSort Xs}
   case Xs
   of nil then nil
   [] [X] then [X]
   else Ys Zs in
      {Split Xs Ys Zs}
      {Merge {MergeSort Ys} {MergeSort Zs}}
   end
end
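And a quick check of the list-based version, again just a usage sketch in the interactive environment:
{Browse {MergeSort [5 2 4 6 1 3]}}   % displays [1 2 3 4 5 6]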
What I am saying is that we have two different implementations of MergeSort: one that utilizes arrays and mutable cells, and another that uses lists and single-assignment variables. The fundamental algorithm of merge sort should be somewhat independent of the data structure (hence the drive to generics in PLs) and should be capable of telling us the common methodology that is used in the two specific instances of MergeSort. Abstract data types and operations on those ADTs are the name of the game. Generic operations that apply to any number of ADTs are the push in functional PLs. CLRS makes no attempt to define MergeSort in a functional style. Indeed, mutable cells are used in practically every algorithm defined in the pseudo-code.
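One small step toward that kind of genericity is to abstract the ordering out of the list version. The following is my own sketch (in the spirit of CTM's treatment of generic functions, not copied verbatim from either book), which parameterizes the sort by an Order function:
% Generic MergeSort over lists, parameterized by an ordering function
fun {GenericMergeSort Order Xs}
   fun {Merge Xs Ys}
      case Xs # Ys
      of nil # Ys then Ys
      [] Xs # nil then Xs
      [] (X|Xr) # (Y|Yr) then
         if {Order X Y} then X|{Merge Xr Ys}
         else Y|{Merge Xs Yr}
         end
      end
   end
   proc {Split Xs ?Ys ?Zs}
      case Xs
      of nil then Ys=nil Zs=nil
      [] [X] then Ys=[X] Zs=nil
      [] X1|X2|Xr then Yr Zr in
         Ys=X1|Yr
         Zs=X2|Zr
         {Split Xr Yr Zr}
      end
   end
   fun {MS Xs}
      case Xs
      of nil then nil
      [] [X] then [X]
      else Ys Zs in
         {Split Xs Ys Zs}
         {Merge {MS Ys} {MS Zs}}
      end
   end
in
   {MS Xs}
end
Calling {GenericMergeSort fun {$ X Y} X<Y end [5 2 4 6 1 3]} sorts in ascending order; passing a different Order function sorts by any other criterion without touching the algorithm itself.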

Last two lectures

I forgot to post an update: the notes for the last two lectures are now up.