It is useful to step back to basic principles to model this scenario sufficiently abstractly. In particular, I want to have no particular notion of a database, except that it is a place where one stores things one learns. So I will define abstract types DB for storage and Datum for the input.
> import Control.Monad.Reader -- Ignore for now, imports are just required up top
>
> data Datum -- Essentially a row of the source database, if row-based
> data DB -- The target database
An Import is a function that takes just one additional datum and incorporates all the learned knowledge into the database.
> type Import = Datum -> DB -> DB
Not just any function is suitable to be an Import. Learning from the same datum twice – in this setting – should not result in more new information. In other words, for f :: Import we require that f datum be idempotent:

f datum == f datum . f datum
Moreover, if we treat the analogy of database contents with knowledge very strictly, we should be unable to ever "know" a contradiction. We can add information, but never remove or change it. This idea is usually discussed as the "information ordering", but I will just call it <=, where no information (aka NULL or ⊥) is less than any value, and all other values are comparable only to themselves.
db <= f datum db
Note that this is stronger than monotonicity, as a constant function is always monotonic. The best word for this property in the context of databases is that f is consistent.
Knowing these properties of an import, I can be assured that it is safe to run the import on all the data available as many times as I like. The order may affect how many runs it takes, since we do not require commutativity:
f datum1 . f datum2 =?= f datum2 . f datum1
However, we can be assured that re-running will eventually hit a fixed point. In practice, it is usually very easy to order an import so that a single run suffices, or two in more complex scenarios.
There are two equivalent yet legitimately interesting ways to think about importing a list of data. The first is the obvious one: For each piece of data, transform the database and pass on the result.
> importAll :: Import -> [Datum] -> DB -> DB
> importAll importOne datums db = foldr importOne db datums
The second considers functions DB -> DB to be more central, and does not even bind the variable db in its definition: it first composes all the single transformations into one mondo transformation, which is then applied to the input database.
> importAll' :: Import -> [Datum] -> DB -> DB
> importAll' importOne datums = foldr (.) id (map importOne datums)
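To see these definitions in action, here is a toy instantiation of the model, assuming a simple key-value store; the KV type, the pair-shaped datum, and the insert-if-absent import are my illustrative choices, not part of the post's abstract types.

```haskell
import qualified Data.Map as M

-- A toy "database": a map from primary key to value.
type KV = M.Map String Int

-- An Import that only ever adds information: an existing key's value
-- is never overwritten, so the import is idempotent and consistent.
importOne :: (String, Int) -> KV -> KV
importOne (k, v) = M.insertWith (\_new old -> old) k v

importAll :: [(String, Int)] -> KV -> KV
importAll datums db = foldr importOne db datums
```

Re-running importAll on the same data reaches a fixed point immediately, just as the consistency argument above predicts.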
The above definition is complete and flexible, but there is more structure to most databases, hence most imports. To model the extremely likely scenario that the database has an atomic element, such as rows for SQL or documents for various flavors of NoSQL, call these things Conclusions, with a single fundamental operation save.
> data Conclusion
>
> save :: Conclusion -> DB -> DB
> save = undefined
With the expected idempotency condition that save con == save con . save con.
Now a typical import will consist of drawing some set of Conclusions for each Datum encountered, possibly by combining the datum with information already stored in the database. For lack of a better name, I will call this a Deduction, and transform it into an Import with the help of save.
> type Deduction = Datum -> DB -> [Conclusion]
>
> deduce :: Deduction -> Import
> deduce deduction datum db = foldr save db (deduction datum db)
However, it may be that things are even simpler and that each Datum results in a single new Conclusion. This is more the usual notion of a "data import" and means – to stretch an already thin analogy – that each Datum is sort of already a Conclusion but with respect to the wrong context, so I’ll call this a Translation.
> type Translation = Datum -> DB -> Conclusion
>
> translate :: Translation -> Import
> translate trans datum db = save (trans datum db) db
However, there is a major problem: neither translate nor deduce results in a consistent import, because multiple Datums may result in a Conclusion with the same primary key but different attributes. This is almost never desirable; when two translations or deductions emit a conclusion with the same primary key, it is intended to be consistent with the database, i.e. they should only emit conclusions con such that save con is consistent:

db <= save con db
In normal database parlance, this is like an "upsert", except on all attributes as well. At the level of rows or documents, we must first fetch the document that would be created, and then modify it according to the new conclusion. Any conflicting attribute is an error (hacks excepted, of course). I will break these apart into Lookup and Augment, which are then recombined in the brilliantly named lookupAndAugment.
> type Lookup = Datum -> DB -> Conclusion
> type Augment = Datum -> DB -> Conclusion -> Conclusion
>
> lookupAndAugment :: Lookup -> Augment -> Translation
> lookupAndAugment lookup augment datum db = augment datum db (lookup datum db)
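For concreteness, the "merge attributes, conflict is an error" policy behind the augment step could look like this, assuming a conclusion's attributes are a Map; mergeAttrs is a hypothetical helper, not from the post.

```haskell
import qualified Data.Map as M

type Attrs = M.Map String String

-- Merge a fetched conclusion's attributes with the new ones.
-- Agreeing attributes unify; a conflicting attribute is an error,
-- matching the "hacks excepted" policy above.
mergeAttrs :: Attrs -> Attrs -> Attrs
mergeAttrs = M.unionWithKey conflict
  where
    conflict k old new
      | old == new = old
      | otherwise  = error ("conflicting attribute: " ++ k)
```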
One could implement lookupAndAugment to enforce that the output conclusion is consistent with the input.
To get a step closer to the imperative scripting that this prototype targets, this section adjusts the definitions above to stop passing the database around quite so much.
A first step is to note that the database is "always there" as part of the environment, which is exactly what the Reader monad represents. Here are all of the above definitions rewritten without ever taking the database as input.
> type Import' = Datum -> Reader DB DB
> type Translation' = Datum -> Reader DB Conclusion
> type Lookup' = Translation'
> type Augment' = Datum -> Reader DB (Conclusion -> Conclusion)
>
> save' :: Conclusion -> Reader DB DB
> save' = undefined
>
> translate' :: Translation' -> Import'
> translate' trans datum = do conclusion <- trans datum
>                             save' conclusion
>
> lookupAndAugment' :: Lookup' -> Augment' -> Translation'
> lookupAndAugment' lookup augment datum = do current <- lookup datum
> improvements <- augment datum
> return (improvements current)
The next and final step is to stop returning the database, but mutate it in place, by replacing Reader DB with IO and DB with ().
> type Import'' = Datum -> IO ()
> type Translation'' = Datum -> IO Conclusion
> type Lookup'' = Translation''
> type Augment'' = Datum -> IO (Conclusion -> Conclusion)
>
> save'' :: Conclusion -> IO ()
> save'' = undefined
>
> translate'' :: Translation'' -> Import''
> translate'' trans datum = do conclusion <- trans datum
> save'' conclusion
>
> lookupAndAugment'' :: Lookup'' -> Augment'' -> Translation''
> lookupAndAugment'' lookup augment datum = do current <- lookup datum
> improvements <- augment datum
> return (improvements current)
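A minimal sketch of how this final IO formulation might look in practice, assuming the mutable database is just an IORef holding a map; every name here is illustrative, not from the post.

```haskell
import Data.IORef
import qualified Data.Map as M

type Conclusion = (String, Int)

-- save'' against an IORef database; existing keys win,
-- keeping the operation idempotent.
saveIO :: IORef (M.Map String Int) -> Conclusion -> IO ()
saveIO ref (k, v) = modifyIORef ref (M.insertWith (\_new old -> old) k v)

-- translate'' specialized to this database representation.
translateIO :: IORef (M.Map String Int) -> (d -> IO Conclusion) -> d -> IO ()
translateIO ref trans datum = do conclusion <- trans datum
                                 saveIO ref conclusion
```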
And it is nice to see that moving from Reader DB to IO does not change the text of lookupAndAugment, so I have some confidence that it is a canonical definition.
That’s it! Just a bit of how I do typed functional prototyping before committing to the task of implementing in a lower-level scripting language.
I recently gave a little demonstration entitled "What is Defunctionalization?" for UCSC TWIGS (the acronym, stolen from a similar seminar in the U. Mass. math department, stands for The "What Is … ?" Graduate Seminar). The inspiration for this talk was just to present what I’d learned after Conor McBride’s brilliant presentation at POPL’08 drove me to put the words "Olivier Danvy defunctionalize continuation" into Google.
I coded the simplest examples from Danvy’s paper in literate Haskell for the audience, and also showed off QuickCheck a little to make sure the translation was correct (finding one error, if I recall).
This blog post is a merging of my talk outline and new stuff that came up live. Try loading it up in GHCi or Haskell-mode and running the examples and QuickCheck properties.
> {-# LANGUAGE RankNTypes #-}
> import Prelude hiding (reverse)
> import Control.Monad
> import Control.Monad.Cont
> import Test.QuickCheck
Broadly, defunctionalization is transforming a program to eliminate higher-order functions. Rather than focus on its use for compilation (see the H Cejtin, S Jagannathan, S Weeks paper on MLton) or analyses (see Firstify from N Mitchell and C Runciman), I wanted to emphasize its use in understanding your own program, along the lines of Wand’s Continuation-Based Program Transformation Strategies (JACM 1980).
Here is the first example from Danvy.
> aux1 :: (Int -> Int) -> Int
> aux1 f = f 1 + f 10
>
> main1 x y b = (aux1 (\z -> x + z)) *
> (aux1 (\z -> if b then y + z else y - z))
Defunctionalization replaces all the first-class functions with an explicit data structure Lam1 and a global apply1 function, essentially embedding a mini-interpreter for just those lambda terms occurring in the program.
> data Lam1 = Lam1A Int -- (Lam1A x) ~ (\z -> x + z)
> | Lam1B Int Bool -- (Lam1B y b) ~ (\z -> if b then y + z else y - z)
>
> apply1 :: Lam1 -> Int -> Int
> apply1 (Lam1A x) z = x + z
> apply1 (Lam1B y b) z = if b then y + z else y - z
>
> aux1defun :: Lam1 -> Int
> aux1defun f = (apply1 f 1) + (apply1 f 10)
>
> main1defun x y b = (aux1defun (Lam1A x)) *
> (aux1defun (Lam1B y b))
Is it correct? Ask QuickCheck:
> prop1 x y b = (main1 x y b == main1defun x y b)
*Main> quickCheck prop1
+++ OK, passed 100 tests.
Next example: flattening a binary tree
> data BinaryTree a = Leaf a
> | Node (BinaryTree a) (BinaryTree a)
First, there is the straightforward, inefficient version of flatten
> flatten (Leaf x) = [x]
> flatten (Node t1 t2) = (flatten t1) ++ (flatten t2)
We can represent our output with difference lists offering O(1) append.
> flatten' t = walk t []
> where walk (Leaf x) = (x:)
> walk (Node t1 t2) = (walk t1) . (walk t2)
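The post only QuickChecks the defunctionalized version below; as a quick standalone sanity check, the difference-list version agrees with the naive one.

```haskell
data BinaryTree a = Leaf a
                  | Node (BinaryTree a) (BinaryTree a)

-- naive flatten: repeated left-nested (++) can be quadratic
flatten :: BinaryTree a -> [a]
flatten (Leaf x)     = [x]
flatten (Node t1 t2) = flatten t1 ++ flatten t2

-- difference-list flatten: each subtree becomes a prepend function
flatten' :: BinaryTree a -> [a]
flatten' t = walk t []
  where walk (Leaf x)     = (x:)
        walk (Node t1 t2) = walk t1 . walk t2
```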
And now that we’ve introduced a bunch of first-class functions, let’s see what happens when we defunctionalize them.
> data LamBTree a = LamBTreeA a -- (x:)
> | LamBTreeB (LamBTree a) (LamBTree a) -- (u . v)
>
> applyBTree (LamBTreeA x) l = x:l
> applyBTree (LamBTreeB u v) l = applyBTree u (applyBTree v l)
>
> flatten_defun t = applyBTree (walk t) []
> where walk (Leaf x) = LamBTreeA x
> walk (Node t1 t2) = LamBTreeB (walk t1) (walk t2)
Note how LamBTree looks just like the definition of BinaryTree, because it is a catamorphism, hence a hylomorphism, i.e. a recursive function with a call tree that looks like a BinaryTree or whatever structure you are hylo’ing over. (See Sorting morphisms for a beautiful example of using this to understand your program (L Augusteijn. AFP 1998).) So walk is pretty much the identity function on trees, and then applyBTree is a flatten function with an accumulating parameter. Ignoring the intermediate structure, we see defunctionalization as a way to derive accumulating parameters.
> prop_flatten :: BinaryTree Int -> Bool
> prop_flatten t = (flatten_defun t == flatten t)
*Main> quickCheck prop_flatten
OK, passed 100 tests
Defunctionalization as an inverse to church encoding
This was possibly my favorite part of Danvy’s paper, but I unfortunately had to elide it from my talk as being slightly too esoteric for the mixed crowd.
Let us suppose that tuples were not built in to Haskell but we needed to make them ourselves. The data type and destructors would look like this:
> data MyPair a b = P a b
> deriving (Eq,Show)
>
> myPair x y = P x y
> myFst (P x y) = x
> mySnd (P x y) = y
And we can ask QuickCheck to test extensionality.
> prop_pair :: MyPair Int Int -> Bool
> prop_pair p = (p == myPair (myFst p) (mySnd p))
*Main> quickCheck prop_pair
+++ OK, passed 100 tests.
So, if you recall whatever lambda-calculus course you may have taken, we can represent them with only functions. I like having rank-N types available for this.
> type ChurchPair a b = (forall c . a -> b -> (a -> b -> c) -> c)
> churchPair x y = (\operation -> operation x y)
> churchFst p = p (\x y -> x)
> churchSnd p = p (\x y -> y)
Rather than make an Arbitrary instance, I’ll settle for reduction rules in this case.
> prop_churchPair :: Int -> Int -> Bool
> prop_churchPair x y = (x == churchFst (churchPair x y)) &&
> (y == churchSnd (churchPair x y))
*Main> quickCheck prop_churchPair
+++ OK, passed 100 tests.
But now I’ve introduced a bunch of higher-order functions, so we just have to see how they defunctionalize!
> data LamSelector = LamFst | LamSnd
> applySelector (LamFst) x y = x
> applySelector (LamSnd) x y = y
>
> data LamPair a b = LamPair a b
> applyPair (LamPair x y) operation = applySelector operation x y
>
> defunPair x y = (LamPair x y)
> defunFst p = applyPair p LamFst
> defunSnd p = applyPair p LamSnd
You can see that this just reproduces the original, modulo inlining!
Now let’s church-encode BinaryTree as its own fold (or catamorphism, if you like). So that we have something to defunctionalize, let’s write churchDepth to calculate the depth of the tree.
> type ChurchTree a = forall c. (a -> c) -> (c -> c -> c) -> c
> churchLeaf x = \onLeaf onNode -> onLeaf x
> churchNode t1 t2 = \onLeaf onNode -> onNode (t1 onLeaf onNode) (t2 onLeaf onNode)
>
> churchFold onLeaf onNode t = t onLeaf onNode
> churchDepth t = t (\x -> 0) (\d1 d2 -> 1 + (d1 `max` d2))
Acknowledging that the church encoding is just the fold, we can defunctionalize the fold over any functor to recover the data type. Anyhow…
> data LamLeaf = LamLeaf -- onLeaf
> applyLeaf LamLeaf x = 0
>
> data LamNode = LamNode -- onNode
> applyNode LamNode d1 d2 = 1 + (d1 `max` d2)
>
> data LamTree a = LamTreeLeaf a -- churchLeaf
> | LamTreeNode (LamTree a) (LamTree a) -- churchNode
>
> applyTree (LamTreeLeaf x) onLeaf onNode = applyLeaf onLeaf x
> applyTree (LamTreeNode t1 t2) onLeaf onNode =
> applyNode onNode (applyTree t1 onLeaf onNode)
> (applyTree t2 onLeaf onNode)
> depth_defun t = applyTree t LamLeaf LamNode
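Reassembled standalone, the Church-encoded depth can be spot-checked against a couple of hand-computed cases; the monomorphic Int result for churchDepth is my choice for the check.

```haskell
{-# LANGUAGE RankNTypes #-}

type ChurchTree a = forall c. (a -> c) -> (c -> c -> c) -> c

churchLeaf :: a -> ChurchTree a
churchLeaf x = \onLeaf _onNode -> onLeaf x

churchNode :: ChurchTree a -> ChurchTree a -> ChurchTree a
churchNode t1 t2 = \onLeaf onNode -> onNode (t1 onLeaf onNode) (t2 onLeaf onNode)

-- a leaf has depth 0; a node is one more than its deeper child
churchDepth :: ChurchTree a -> Int
churchDepth t = t (\_ -> 0) (\d1 d2 -> 1 + (d1 `max` d2))
```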
applyTree is just fold over a tree, as promised, and we’ve recovered the tree data structure.
Defunctionalize your continuation
Suppose you have a first-order (boring!) program. You can’t have any fun until you find a way to introduce some first-class functions. A classic way to introduce a gratuitous number is to convert your code into continuation-passing style. Let’s try it.
This is Danvy’s example of a parser to recognize the language 0^n 1^n. It is written with an auxiliary function in the Maybe monad to simulate throwing an exception as soon as we can reject the string.
> recognize :: [Int] -> Bool
> recognize xs = case walk xs of Just [] -> True
> _ -> False
> where
> walk :: [Int] -> Maybe [Int]
> walk (0:xs') = do remaining <- walk xs'
> case remaining of (1:ys) -> Just ys
> _ -> Nothing
> walk xs = Just xs -- otherwise
The audience insisted that I test this, because it is a bit of a weird way to write a trivial function.
> prop_recognize1 n = recognize (take n (repeat 0) ++ take n (repeat 1))
> prop_recognize2 m n = (m >= 0 && n >= 1 && m /= n)
> ==> (not (recognize (take m (repeat 0) ++ take n (repeat 1))))
Main> quickCheck prop_recognize1
+++ OK, passed 100 tests.
Main> quickCheck prop_recognize2
+++ OK, passed 100 tests.
Now if we CPS it, we no longer need the Maybe because we can just discard the continuation and return False.
> recognize' :: [Int] -> Bool
> recognize' xs = walk xs (\xs' -> [] == xs')
> where
> walk :: [Int] -> ([Int] -> Bool) -> Bool
> walk (0:xs') k = walk xs' (\rem -> case rem of (1:ys) -> k ys
> _ -> False)
> walk xs k = k xs -- otherwise
And defunctionalizing it
> data Continuation = ContToplevel -- (\xs' -> null xs')
> | ContRecurse Continuation -- (\remaining -> ... )
>
> applyCont ContToplevel l = (l==[])
> applyCont (ContRecurse k) l = case l of (1:ys) -> applyCont k ys
> _ -> False
>
> recognize'' xs = walk xs ContToplevel
> where
> walk (0:xs') k = walk xs' (ContRecurse k)
> walk xs k = applyCont k xs
But now we can look and see that the continuation data structure is just implementing natural numbers, so we replace it by an Int.
> applyNumCont 0 [] = True
> applyNumCont k (1:xs) = applyNumCont (k-1) xs
> applyNumCont _ _ = False
> recognize_final xs = walk xs 0
> where walk (0:xs') k = walk xs' (k+1)
> walk xs k = applyNumCont k xs
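Reassembling the derived first-order recognizer standalone (with the numeric continuation's interpreter written as a local counter) lets us spot-check it directly; recognizeFO is my name for it.

```haskell
-- First-order recognizer for 0^n 1^n: count the 0s, then check that
-- exactly that many 1s (and nothing else) remain.
recognizeFO :: [Int] -> Bool
recognizeFO xs = walk xs 0
  where
    walk (0:xs') k = walk xs' (k + 1)
    walk ys      k = count ys k
    count []     0 = True
    count (1:ys) k | k > 0 = count ys (k - 1)
    count _      _ = False
```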
We get the program we should have written in the first place.
I didn’t get to the rest of this in my talk, and anyhow it is most interesting to people who play with operational semantics a lot. This last bit is from Danvy’s paper On Evaluation Contexts, Continuations, and The Rest of Computation from the continuation workshop in 2004.
We have a simple arithmetic language, and two ways of giving it a semantics: We can either reduce the expression a single small step, using reduceAllTheWay to normalize it, or we can eval the expression directly to a result.
> data Exp = Value Int
> | Add Exp Exp
>
> reduce :: Exp -> Exp
> reduce (Add (Value v1) (Value v2)) = Value (v1 + v2)
> reduce (Add (Value v1) e2 ) = Add (Value v1) (reduce e2)
> reduce (Add e1 e2 ) = Add (reduce e1) e2
>
> reduceAllTheWay :: Exp -> Int
> reduceAllTheWay (Value v) = v
> reduceAllTheWay e = reduceAllTheWay (reduce e)
>
> eval :: Exp -> Int
> eval (Value v) = v
> eval (Add e1 e2) = (eval e1) + (eval e2)
Now we CPS both of them
> reduceCPS :: Exp -> (Exp -> a) -> a
> reduceCPS (Add (Value v1) (Value v2)) k = k (Value (v1 + v2))
> reduceCPS (Add (Value v1) e2 ) k = reduceCPS e2 (\e -> k (Add (Value v1) e))
> reduceCPS (Add e1 e2 ) k = reduceCPS e1 (\e -> k (Add e e2))
>
> reduceAllTheWayCPS :: Exp -> (Int -> a) -> a
> reduceAllTheWayCPS (Value v) k = k v
> reduceAllTheWayCPS e k = reduceCPS e (flip reduceAllTheWayCPS k)
> evalCPS :: Exp -> (Int -> a) -> a
> evalCPS (Value v) k = k v
> evalCPS (Add e1 e2) k = evalCPS e1 (\v1 -> evalCPS e2 (\v2 -> k (v1 + v2)))
CPS code is easier to read when I write it like this
> evalCPS' :: Exp -> (Int -> a) -> a
> evalCPS' (Value v) k = k v
> evalCPS' (Add e1 e2) k = evalCPS e1 (\v1 ->
> evalCPS e2 (\v2 ->
> k (v1+v2) ))
and I might as well admit that Haskell already has this in the bag
> evalCPS'' :: Exp -> Cont a Int
> evalCPS'' (Value v) = return v
> evalCPS'' (Add e1 e2) = do v1 <- evalCPS'' e1
> v2 <- evalCPS'' e2
> return (v1 + v2)
Now we defunctionalize both semantics
> data ContReduce = ContReduce -- (flip reduceAllTheWay)
> | ContAddV1 Exp ContReduce -- (\e -> Add (Value v1) e)
> | ContAddE2 ContReduce Exp -- (\e -> Add e e2)
>
> applyContReduce :: ContReduce -> Exp -> Exp
> applyContReduce ContReduce e = e
> applyContReduce (ContAddV1 v1 k) e = applyContReduce k (Add v1 e)
> applyContReduce (ContAddE2 k e2) e = applyContReduce k (Add e e2)
>
> reduceCPSdefun :: Exp -> ContReduce -> Exp
> reduceCPSdefun (Add (Value v1) (Value v2)) k = applyContReduce k (Value (v1 + v2))
> reduceCPSdefun (Add (Value v1) e2 ) k = reduceCPSdefun e2 (ContAddV1 (Value v1) k)
> reduceCPSdefun (Add e1 e2 ) k = reduceCPSdefun e1 (ContAddE2 k e2)
The data type ContReduce is the now-common notion of an "evaluation context" which some researchers prefer because it separates the important rules about how terms are reduced from the rules that just tell you where in a term reduction happens.
Next…
> data ContEval = ContEval -- toplevel
> | ContE1 ContEval Exp
> | ContE2 Int ContEval
>
> applyContEval :: ContEval -> Int -> Int
> applyContEval (ContEval) v = v
> applyContEval (ContE1 k e2) v1 = evalCPSdefun e2 (ContE2 v1 k)
> applyContEval (ContE2 v1 k) v2 = applyContEval k (v1 + v2)
>
> evalCPSdefun (Value v) k = applyContEval k v
> evalCPSdefun (Add e1 e2) k = evalCPSdefun e1 (ContE1 k e2)
Hey it looks like almost the same thing! The difference is in how we interpret the data structure. In the previous case, applyContReduce just used it for navigation. In this case, applyContEval calls back into evalCPSdefun to keep the evaluation rolling.
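Reassembled standalone, the defunctionalized evaluator can be checked against the direct eval:

```haskell
data Exp = Value Int
         | Add Exp Exp

eval :: Exp -> Int
eval (Value v)   = v
eval (Add e1 e2) = eval e1 + eval e2

data ContEval = ContEval            -- toplevel
              | ContE1 ContEval Exp -- evaluate e2 next
              | ContE2 Int ContEval -- add the two results

applyContEval :: ContEval -> Int -> Int
applyContEval ContEval      v  = v
applyContEval (ContE1 k e2) v1 = evalCPSdefun e2 (ContE2 v1 k)
applyContEval (ContE2 v1 k) v2 = applyContEval k (v1 + v2)

evalCPSdefun :: Exp -> ContEval -> Int
evalCPSdefun (Value v)   k = applyContEval k v
evalCPSdefun (Add e1 e2) k = evalCPSdefun e1 (ContE1 k e2)
```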
If, like myself, you liked this because you feel there is important and interesting structure underlying operational semantics that is hidden by its many superficial forms, then you’ll probably like this additional reading:
* Modularity and implementation of mathematical operational semantics. M Jaskelioff, N Ghani, G Hutton. MSFP 2008.
* Fold and unfold for program semantics. G Hutton. ICFP 1998.
And I leave you with my hidden instances of arbitrary…
> instance Arbitrary a => Arbitrary (BinaryTree a) where
> arbitrary = oneof [liftM Leaf arbitrary, liftM2 Node arbitrary arbitrary]
> instance (Arbitrary a, Arbitrary b) => Arbitrary (MyPair a b) where
> arbitrary = liftM2 myPair arbitrary arbitrary
But having just finished a post using open recursion, it immediately cried out to me that open-recursive functions already have some debugging hooks for tracing/breakpoints/etc. Naturally, some complications arose, and I got to try out some other cool ideas from the literature.
To combine the State in which I store the memoization table with the IO I use for debugging, I use the coproduct of monads from Lüth and Ghani’s Composing Monads Using Coproducts. And then to reduce the plumbing overhead I use the injection type class from Swierstra’s Data types à la carte.
This post is, as usual, a literate Haskell file so load it up in GHCi or Emacs Haskell-mode and see what happens.
> {-# LANGUAGE TypeOperators,ScopedTypeVariables,PatternSignatures,RankNTypes,FlexibleInstances,UndecidableInstances,OverlappingInstances,IncoherentInstances,MultiParamTypeClasses,FlexibleContexts #-}
> import qualified Data.Map as M
> import Control.Monad.State hiding (fix)
Here’s the previous example of a monadified, open-recursion fibonacci,
> type Gen a = (a -> a)
>
> fix :: Gen a -> a
> fix f = f (fix f)
>
> gmFib :: Monad m => Gen (Int -> m Int)
> gmFib recurse 0 = return 0
> gmFib recurse 1 = return 1
> gmFib recurse n = do a <- recurse (n-1)
> b <- recurse (n-2)
> return (a + b)
… and the memoization mixin
> type Memoized a b = State (M.Map a b)
>
> memoize :: (Ord a) => Gen (a -> Memoized a b b)
> memoize self x = do memo <- check x
> case memo of
> Just y -> return y
> Nothing -> do y <- self x
> store x y
> return y
>
> check :: Ord a => a -> Memoized a b (Maybe b)
> check x = do memotable <- get
> return (M.lookup x memotable)
>
> store :: Ord a => a -> b -> Memoized a b ()
> store x y = do memotable <- get
> put (M.insert x y memotable)
>
> runMemo :: Ord a => Memoized a b c -> c
> runMemo m = evalState m M.empty
>
> fibMemo n = runMemo $ fix (gmFib . memoize) n
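Pulled together standalone (with check and store inlined into the mixin, and using Control.Monad.Trans.State from the transformers package rather than mtl), the combination can be spot-checked:

```haskell
import qualified Data.Map as M
import Control.Monad.Trans.State (State, evalState, gets, modify)

type Gen a = a -> a

-- fix' rather than fix, to avoid clashing with Data.Function.fix
fix' :: Gen a -> a
fix' f = f (fix' f)

-- the open-recursion fibonacci, generalized over any monad
gmFib :: Monad m => Gen (Int -> m Integer)
gmFib _       0 = return 0
gmFib _       1 = return 1
gmFib recurse n = do a <- recurse (n - 1)
                     b <- recurse (n - 2)
                     return (a + b)

-- the memoization mixin, with check/store inlined
memoize :: Ord a => Gen (a -> State (M.Map a b) b)
memoize self x = do memo <- gets (M.lookup x)
                    case memo of
                      Just y  -> return y
                      Nothing -> do y <- self x
                                    modify (M.insert x y)
                                    return y

fibMemo :: Int -> Integer
fibMemo n = evalState (fix' (gmFib . memoize) n) M.empty
```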
So let us begin debugging. The first thing that comes to mind is viewing the results of each recursive call.
> inspect :: (Show a, Show b) => String -> Gen (a -> IO b)
> inspect name self arg = do result <- self arg
> putStrLn $ name ++ " " ++ show arg
> ++ " = " ++ show result
> return result
> fibInspect n = fix (gmFib . inspect "fib") n
*Main> fibInspect 5
fib 1 = 1
fib 0 = 0
fib 1 = 1
fib 2 = 1
fib 3 = 2
fib 0 = 0
fib 1 = 1
fib 2 = 1
fib 1 = 1
fib 0 = 0
fib 1 = 1
fib 2 = 1
fib 3 = 2
fib 4 = 3
5
That was easy! Now when I also mix in the memoization I should see a lot of those recursive calls drop away. But I cannot simply write fix (gmFib . inspect "fib" . memoize), because mixing in inspect fixes the underlying monad to IO, while mixing in memoize fixes it to Memoized Int Int. I need to run this computation in a monad that supports the operations of both IO and State. Well, in category theory terms, the smallest "thing" that contains two other "things" is their coproduct, so this is exactly what the Luth-Ghani paper mentioned above is for!
I’ll be inlining and de-generalizing a bunch of the (beautiful) code from the paper to make it look more like something an "in the trenches" programmer would write.
> data Plus m1 m2 a = T1 (m1 (Plus m1 m2 a))
> | T2 (m2 (Plus m1 m2 a))
> | Var a
This data type is not exactly the coproduct, but rather a data type that can represent it, like using a list to represent a set — there are more lists than sets, but if you respect the abstraction you are OK. Most of the ways of processing this data structure can be written in Haskell using only Functor instances for the underlying structure, but to make sure we only use it in the appropriate places I’ve just made the stronger requirement that m1 and m2 be Monads everywhere. But I still want fmap, so I turn on undecidable instances and add the following.
> instance Monad m => Functor m where
> fmap f m = m >>= (return . f)
Now you might ask why I’m not using monad transformers. Four reasons come to mind:
This is now one of those structures that is so abstract that you can figure out how to process it just by writing the only function of the proper type.
> fold :: (Monad f1, Monad f2) => -- fold by cases over Plus
> (a -> b) -> -- variables
> (f1 b -> b) -> -- bits from f1
> (f2 b -> b) -> -- bits from f2
> Plus f1 f2 a -> -- the input
> b -- Yay!
> fold e f1 f2 (Var x) = e x
> fold e f1 f2 (T1 t) = f1 (fmap (fold e f1 f2) t)
> fold e f1 f2 (T2 t) = f2 (fmap (fold e f1 f2) t)
> instance (Monad m1, Monad m2) => Monad (Plus m1 m2) where
> return x = Var x
> m >>= f = fold f T1 T2 m
The functor instance induced by the monad would look like this

instance (Monad m1, Monad m2) => Functor (Plus m1 m2) where
  fmap f = fold (Var . f) T1 T2
Here fmap traverses the shapes of m1 and m2 and applies f where it finds a Var constructor. To get a better picture, try combining the bodies of fold and fmap:
fmap f (Var x) = Var (f x)
fmap f (T1 t)  = T1 (fmap (fmap f) t)
fmap f (T2 t)  = T2 (fmap (fmap f) t)
And then we want to be able to inject things from m1 and m2 into the coproduct.
> inl :: Monad m1 => m1 a -> Plus m1 m2 a
> inl = T1 . fmap Var
> inr :: Monad m2 => m2 a -> Plus m1 m2 a
> inr = T2 . fmap Var
> liftL :: Monad m1 => (a -> m1 b) -> (a -> Plus m1 m2 b)
> liftL f = inl . f
> liftR :: Monad m2 => (a -> m2 b) -> (a -> Plus m1 m2 b)
> liftR f = inr . f
At this point I’ve got the machinery to combine the IO and Memoized monads as desired, but my code would be full of inr, inl, liftL and liftR. This is where we bring in the Swierstra pearl (used and discussed all over the place: see Modularity and implementation of mathematical operational semantics, Phil Wadler’s blog, a thread on haskell-cafe, and of course Lambda the Ultimate).
Again, I’m specializing all the types to Monad for clarity/laziness, but they were presented for more general functors.
> class (Monad smaller, Monad larger) => Included smaller larger where
> inject :: smaller a -> larger a
> instance Monad f => Included f f where
> inject = id
> instance (Monad f, Monad g) => Included f (Plus f g) where
> inject = inl
> instance (Monad f, Monad g) => Included g (Plus f g) where
> inject = inr
Also, since for this example I don’t use nested coproducts, I’m leaving out this instance, which opens up a can of worms:

instance (Monad f, Monad g, Monad h, Included f h) => Included f (Plus g h) where
  inject = inr . inject
Definitely see the links above if you are curious about how this plays out.
Back to the debugging story. Here is how we modify inspect and memoize.
> inspectM :: (Show a, Show b, Monad m, Included IO m) => String -> Gen (a -> m b)
> inspectM name self arg = do result <- self arg
> inject $ putStrLn $ name ++ " " ++ show arg
> ++ " = " ++ show result
> return result
> memoizeM :: (Ord a, Monad m, Included (Memoized a b) m) => Gen (a -> m b)
> memoizeM self x = do memo <- inject $ check x
> case memo of
> Just y -> return y
> Nothing -> do y <- self x
> inject $ store x y
> return y
> mFibTraceMemo :: Int -> Plus (Memoized Int Int) IO Int
> mFibTraceMemo = fix (gmFib . memoizeM . inspectM "fib")
But wait, how do I run this thing? It has IO and Memoized layers all mixed up! Intuitively, I’m sure you believe that if I start with an empty memo table and start running an IO that has some memoized bits in it, I can thread the memo table throughout.
In classic Haskell style, we can separate the "threading" concern from the "running" by writing an untangling function of type Plus m1 m2 a -> m1 (m2 a). But in fact, we don’t even need to do that much work. Discussed but not hacked up in the Luth-Ghani paper is the idea of a distributivity law, which in hacking terms means a function that just does one bit of the untangling, specifically a single "untwist" forall a. m2 (m1 a) -> m1 (m2 a). If we can write an untwist function, then a fold over the monad coproduct does the rest of the untangling.
Let us make this concrete for IO and State.
> ioState :: IO (State s c) -> State s (IO c)
> ioState io = State $ \s -> ((do st <- io
> return (evalState st s)), s)
This function essentially corresponds to the MonadIO instance of the StateT monad transformer. More generally, Luth-Ghani show that when you can write one of these distributivity laws, then using the coproduct is isomorphic to using monad transformers, so I already knew this part would work out.
Here is how we fold an "untwist" into an "untangle"
> distribL :: (Monad m1, Monad m2) =>
> (forall b. m2 (m1 b) -> m1 (m2 b)) -> -- A flick of the wrist
> Plus m1 m2 a -> -- A tangled skein
> m1 (m2 a) -- A silken thread
> distribL untwist = fold (return . return) join (fmap join . untwist)
It may be easier to see it written out in pointful style.
distribL untwist (Var x) = return (return x)
distribL untwist (T1 t)  = join (fmap (distribL untwist) t)
distribL untwist (T2 t)  = fmap join (untwist (fmap (distribL untwist) t))
Another way to convince yourself that your function is correct is to think: how many functions even have the necessary type? Not very many, since you need the higher-rank type for the parameter for this guy to even type check! When dealing with very abstract functions, you often gain enough via parametricity to make up for the loss in intuitive clarity.
> runMemoIO :: Plus (Memoized a b) IO b -> IO b
> runMemoIO result = evalState (distribL ioState result) M.empty
> fibTraceMemo = runMemoIO . mFibTraceMemo
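To see the whole coproduct pipeline run end to end on something small, here is the same Plus/fold/distribL machinery specialized to two familiar monads, Maybe and lists, where untwist is just sequence. The explicit Functor/Applicative instances and the fold' name are my adjustments for modern GHC; the post's Monad-only formulation predates them.

```haskell
{-# LANGUAGE RankNTypes #-}
import Control.Monad (ap, join)

data Plus m1 m2 a = T1 (m1 (Plus m1 m2 a))
                  | T2 (m2 (Plus m1 m2 a))
                  | Var a

fold' :: (Functor m1, Functor m2)
      => (a -> b) -> (m1 b -> b) -> (m2 b -> b) -> Plus m1 m2 a -> b
fold' e _  _  (Var x) = e x
fold' e f1 f2 (T1 t)  = f1 (fmap (fold' e f1 f2) t)
fold' e f1 f2 (T2 t)  = f2 (fmap (fold' e f1 f2) t)

instance (Functor m1, Functor m2) => Functor (Plus m1 m2) where
  fmap f = fold' (Var . f) T1 T2
instance (Functor m1, Functor m2) => Applicative (Plus m1 m2) where
  pure  = Var
  (<*>) = ap
instance (Functor m1, Functor m2) => Monad (Plus m1 m2) where
  m >>= f = fold' f T1 T2 m

inl :: Functor m1 => m1 a -> Plus m1 m2 a
inl = T1 . fmap Var

inr :: Functor m2 => m2 a -> Plus m1 m2 a
inr = T2 . fmap Var

-- untangle a coproduct term, given one "untwist" of the layers
distribL :: (Monad m1, Monad m2)
         => (forall b. m2 (m1 b) -> m1 (m2 b))
         -> Plus m1 m2 a -> m1 (m2 a)
distribL untwist = fold' (return . return) join (fmap join . untwist)
```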
Now we can visually confirm that it is not repeating any computation:

*Main> fibTraceMemo 10
fib 1 = 1
fib 0 = 0
fib 2 = 1
fib 3 = 2
fib 4 = 3
fib 5 = 5
fib 6 = 8
fib 7 = 13
fib 8 = 21
fib 9 = 34
55
Note that this is a little sensitive to explicit type signatures again. When I inlined the body of mFibTraceMemo I needed to ascribe a type to memoizeM like so:

memoizeM' :: Gen (Int -> Plus (Memoized Int Int) IO Int) = memoizeM
Now that the vamp is playing, let’s riff on it. How about catching calls to negative numbers?
> guardedBail :: forall a b m. (Monad m, Included (Memoized a b) m) =>
> (a -> Bool) -> Gen (a -> m b)
> guardedBail pred self arg = if pred arg then error "Forbidden!" else self arg
Or suppose we have memory consumption concerns, and we want to watch the size of the memo table?
> printSize :: forall a b m.
> (Monad m, Included (Memoized a b) m, Included IO m) =>
> Gen (a -> m b)
> printSize self arg = do result <- self arg
> size <- inject $ getSize
> inject $ putStrLn $ "Memo table size: " ++ show size
> return result
> where getSize :: Memoized a b Int = do memotable <- get
> return $ M.size memotable
When I try to separate getSize as an independent function (which clearly it is) I get type class error message pain, so I left it in the where clause.
> mFibSizeTrace :: Int -> Plus (Memoized Int Int) IO Int
> mFibSizeTrace = fix (gmFib . memoizeM . printSize
> . inspectM "fib" . guardedBail (<0))
> fibSizeTrace n = runMemoIO $ mFibSizeTrace n
Main> fibSizeTrace 10
fib 1 = 1
Memo table size: 0
fib 0 = 0
Memo table size: 1
fib 2 = 1
Memo table size: 2
fib 3 = 2
Memo table size: 3
fib 4 = 3
Memo table size: 4
fib 5 = 5
Memo table size: 5
fib 6 = 8
Memo table size: 6
fib 7 = 13
Memo table size: 7
fib 8 = 21
Memo table size: 8
fib 9 = 34
Memo table size: 9
55
Main> fibSizeTrace (-1)
*** Exception: Forbidden!
Of course, we are storing all these past results that don’t matter anymore. I can certainly delete the entry that is three less than the current argument.
> clearPrev :: forall b m.
> (Monad m, Included (Memoized Int b) m) => Gen (Int -> m b)
> clearPrev self arg = do inject $ clear (arg - 3)
> self arg
> where clear :: Int -> Memoized Int b ()
> clear key = do memotable <- get
> put (M.delete key memotable)
> mFibFinal :: Int -> Plus (Memoized Int Int) IO Int
> mFibFinal = fix (gmFib . clearPrev . memoizeM . inspectM "fib"
> . guardedBail (<0) . printSize)
> fibFinal n = runMemoIO $ mFibFinal n
*Main> fibFinal 15
Memo table size: 0
fib 1 = 1
Memo table size: 1
fib 0 = 0
Memo table size: 2
fib 2 = 1
Memo table size: 3
fib 3 = 2
Memo table size: 4
fib 4 = 3
Memo table size: 4
fib 5 = 5
Memo table size: 4
fib 6 = 8
Memo table size: 4
fib 7 = 13
Memo table size: 4
fib 8 = 21
Memo table size: 4
fib 9 = 34
Memo table size: 4
fib 10 = 55
Memo table size: 4
fib 11 = 89
Memo table size: 4
fib 12 = 144
Memo table size: 4
fib 13 = 233
Memo table size: 4
fib 14 = 377
610
I have a vague feeling that a real debugging package could be made from this approach, but if not, at least today was some fun.
The “efficient algorithm” presented in the paper is, upon reflection, merely a memoized traversal of the state machine, so I combined it with a modified version of the memoization technique, which actually eliminated an auxiliary function from the algorithm and made the Haskell specification of the meaning of CTL connectives clearer than my English prose! (But I’ll still explain it in English.)
> {-# LANGUAGE TypeOperators,ScopedTypeVariables,PatternSignatures #-}
> import qualified Data.Map as M
> import qualified Data.Set as S
> import Data.List
> import Control.Monad
> import Control.Monad.State hiding (fix)
Here is the easy example from the memoization paper, before getting on to model checking.
To enable functional mixins, we write our functions using open recursion. Instead of a function of type a -> b, we write one of type Gen (a -> b), and then later “tie the knot” with fix (reproduced here for reference):
> type Gen a = (a -> a)
> fix :: Gen a -> a
> fix f = f (fix f)
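To see the knot-tying on something even smaller than fib, here is a standalone sketch (kept out of the bird tracks, and duplicating Gen and fix, so it doesn't clash with the post's own definitions) of an open-recursive factorial:

```haskell
-- Standalone sketch: open recursion plus fix recovers ordinary recursion.
type Gen a = a -> a

fix :: Gen a -> a
fix f = f (fix f)

-- Factorial with the recursive call abstracted out as a parameter
gFact :: Gen (Integer -> Integer)
gFact _       0 = 1
gFact recurse n = n * recurse (n - 1)

-- Tying the knot gives back the usual function
fact :: Integer -> Integer
fact = fix gFact
```

The point of leaving the recursion open is that something other than gFact itself can be spliced in before the knot is tied, which is exactly what the mixins above exploit.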
The classic example they start with is fibonacci, but don’t stop reading! It is just to illustrate the technique.
> fib :: Int -> Int
> fib 0 = 0
> fib 1 = 1
> fib (n+2) = fib n + fib (n+1)
By the time you get to fib 30, it takes a dozen or so seconds to return on my poor old computer. Rewritten in open recursion:
> gFib :: Gen (Int -> Int)
> gFib recurse 0 = 0
> gFib recurse 1 = 1
> gFib recurse (n+2) = recurse n + recurse (n+1)
> fib' = fix gFib
This has essentially the same performance, up to constant factors: slow.
To enable us to store the memoization table in something like a State monad, we parameterize over an underlying monad. A beautiful technique in its own right, if you ask me.
> gmFib :: Monad m => Gen (Int -> m Int)
> gmFib recurse 0 = return 0
> gmFib recurse 1 = return 1
> gmFib recurse (n+2) = do a <- recurse n
> b <- recurse (n+1)
> return (a + b)
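One sanity check worth making (my aside, not from the paper): instantiating m to Identity should recover the plain, slow fib. A standalone sketch, with guards in place of the post's n+k patterns:

```haskell
import Data.Functor.Identity (runIdentity)

type Gen a = a -> a

fix :: Gen a -> a
fix f = f (fix f)

-- The monadified generator, as above but with guards instead of n+k patterns
gmFib :: Monad m => Gen (Int -> m Int)
gmFib _       0 = return 0
gmFib _       1 = return 1
gmFib recurse n = do a <- recurse (n - 2)
                     b <- recurse (n - 1)
                     return (a + b)

-- With m = Identity there are no effects, and we get the ordinary function
fibPlain :: Int -> Int
fibPlain n = runIdentity (fix gmFib n)
```

So monadifying costs nothing semantically; the monad parameter is pure slack for mixins like memoize to take up.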
And now we can mix in memoize
> type Memoized a b = State (M.Map a b)
> memoize :: Ord a => Gen (a -> Memoized a b b)
> memoize self x = do memo <- check x
> case memo of
> Just y -> return y
> Nothing -> do y <- self x
> store x y
> return y
> check :: Ord a => a -> Memoized a b (Maybe b)
> check x = do memotable <- get
> return (M.lookup x memotable)
> store :: Ord a => a -> b -> Memoized a b ()
> store x y = do memotable <- get
> put (M.insert x y memotable)
Here’s the final fib, which returns instantly up to at least 10000.
> fib'' n = evalState (fix (gmFib . memoize) n) M.empty
CTL
The language for which Clarke et al give a graph-based algorithm is called CTL (Computation Tree Logic). It is a logic for specifying certain restricted kinds of predicates over the states of a state machine, for today a finite state machine.
The language of CTL formulas in Haskell looks like this:
> data CTL p = TT | FF
> | Atomic p
> | Not (CTL p)
> | And (CTL p) (CTL p)
> | AllSucc (CTL p)
> | ExSucc (CTL p)
> | AllUntil (CTL p) (CTL p)
> | ExUntil (CTL p) (CTL p)
> deriving (Eq,Ord)
Some of the formulae are simply your usual logic: TT holds of any state; FF never holds; Atomic p is some atomic proposition over a state, like inequality over program variables, etc.; Not and And have their expected meanings.
The Succ constructions let you talk about “the next” state: AllSucc f (respectively ExSucc f) holds of a state s when the formula f holds for all (respectively some) successor states.
The interesting ones, though, are “All Until” and “Exists Until”. AllUntil f1 f2 holds of a state s when every trace starting at s eventually reaches a state satisfying f2, with f1 holding everywhere along the way. ExUntil is the existential version of that.
We can then define some more predicates like “forever in the future” and “eventually”
> allFuture f = AllUntil TT f
> existsFuture f = ExUntil TT f
> allGlobal f = Not(existsFuture(Not f))
> existsGlobal f = Not(allFuture(Not f))
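As a small example of what these derived forms buy you (my addition, not from the paper): the classic “no deadlock” property — every reachable state has at least one successor — is AG EX true in the usual notation, and it reads straight off the combinators. A standalone sketch duplicating the datatype (with Show added for convenience):

```haskell
-- Standalone copy of the CTL type from the post
data CTL p = TT | FF
           | Atomic p
           | Not (CTL p)
           | And (CTL p) (CTL p)
           | AllSucc (CTL p)
           | ExSucc (CTL p)
           | AllUntil (CTL p) (CTL p)
           | ExUntil (CTL p) (CTL p)
           deriving (Eq, Ord, Show)

existsFuture, allGlobal :: CTL p -> CTL p
existsFuture f = ExUntil TT f
allGlobal f = Not (existsFuture (Not f))

-- "No deadlock": globally, some successor state exists (AG EX true)
deadlockFree :: CTL p
deadlockFree = allGlobal (ExSucc TT)
```

Because the connectives are plain constructors, the derived formula is just a data value that can be inspected, compared, or handed to checkCTL below.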
Now, to apply a formula to a state machine, first I need the state machine. I’ll just represent it by its successor function.
> type Succ st = st -> [st]
And we need some interpretation of atomic formulas as predicates over states
> type Interp p st = p -> st -> Bool
And since I’m using a monadified form of computation, I will lift a bunch of operations into monads to make everything readable.
> andThen,orElse :: Monad m => m Bool -> m Bool -> m Bool
> andThen = liftM2 (&&)
> orElse = liftM2 (||)
> notM :: Monad m => m Bool -> m Bool
> notM = liftM not
> anyM,allM :: Monad m => (s -> m Bool) -> [s] -> m Bool
> allM f = liftM and . mapM f
> anyM f = liftM or . mapM f
THE ALGORITHM
In the Clarke et al paper, the algorithm is expressed by induction on the formula f you want to check: first, label your state-space graph with all the atomic formulae that hold at each state. Then label it with each compound formula of height two that holds, and so on; at each step you are guaranteed that the graph is already labelled with all the subformulas.
Like dynamic programming, this is simply a complicated way of expressing memoization. In fact, they even use a depth-first search helper function that is completely eliminated by expressing it as a memoized function. This code is considerably shorter and, I think, clearer than the pseudocode in the paper.
Today we have fancy algorithms involving BDDs and abstraction, so I’m not claiming anything useful except pedagogically. I do wonder, though, if this code gains something through laziness. It certainly traverses the state space fewer times (but I’m sure an implementation of their algorithm would do similar optimizations).
> checkCTL :: forall p st . (Ord p, Ord st) =>
> Interp p st -> Succ st -> st -> CTL p -> Bool
> checkCTL interp succ init f =
> evalState (fix (gCheckCTL . cyclicMemoize2 False) f init) M.empty
> where
> gCheckCTL :: Monad m => Gen (CTL p -> st -> m Bool)
> gCheckCTL recurse f s = checkFormula f
> where checkFormula TT = return True
> checkFormula FF = return False
> checkFormula (Atomic p) = return (interp p s)
> checkFormula (Not f1) = notM (recurse f1 s)
> checkFormula (And f1 f2) = recurse f1 s `andThen` recurse f2 s
> checkFormula (AllSucc f1) = allM (recurse f1) (succ s)
> checkFormula (ExSucc f1) = anyM (recurse f1) (succ s)
> checkFormula (AllUntil f1 f2) = recurse f2 s `orElse`
> (recurse f1 s `andThen` allM (recurse f) (succ s))
> checkFormula (ExUntil f1 f2) = recurse f2 s `orElse`
> (recurse f1 s `andThen` anyM (recurse f) (succ s))
You notice that I cheated a little, perhaps: I have used cyclicMemoize2 instead of memoize:
> cyclicMemoize2 :: (Ord a, Ord b) => c -> Gen (a -> b -> Memoized (a,b) c c)
> cyclicMemoize2 backEdge self x y = do memo <- check (x,y)
> case memo of
> Just z -> return z
> Nothing -> do store (x,y) backEdge
> z <- self x y
> store (x,y) z
> return z
One reason is simply that I need a curry/uncurry wrapper for my two-argument monadified function. The deeper reason is that cyclicMemoize2 False inserts a fake memoization entry while a computation is in progress. If there is ever a “back edge” in the search, it will return this dummy entry. For CTL, the auxiliary depth-first search used in the paper for AllUntil returns False in these cases, so I seed the memo table accordingly: by the time you have recursed around a cycle, the f2 you are searching for did not occur on the cycle, so it never will.
To play with it, I’ve only made a couple of examples involving stop lights (of occasionally curious colors). I’d love more, and you’ll undoubtedly find bugs if you actually run something significant.
> ex1interp p s = (p == s)
> ex1succ "Red" = ["Green"]
> ex1succ "Green" = ["Yellow"]
> ex1succ "Yellow" = ["Red"]
> ex2succ "Red" = ["Green"]
> ex2succ "Green" = ["Yellow", "Orange"]
> ex2succ "Orange" = ["Red"]
> ex2succ "Yellow" = ["Red"]
But it looks kind of OK:
*Main> let ch2 = checkCTL ex1interp ex2succ
*Main> let ch1 = checkCTL ex1interp ex1succ
*Main> ch1 "Red" (existsFuture (Atomic "Red"))
True
*Main> ch1 "Red" (existsFuture (Atomic "Blue"))
False
*Main> ch2 "Green" (ExUntil TT (Atomic "Red"))
True
*Main> ch2 "Green" (ExUntil (Atomic "Green") (Atomic "Orange"))
True
*Main> ch1 "Green" (Not (AllUntil (Not (Atomic "Yellow")) (Atomic "Red")))
True
*Main> ch1 "Green" (Not (ExUntil (Not (Atomic "Yellow")) (Atomic "Red")))
True
*Main> ch2 "Green" (Not (ExUntil (Not (Atomic "Yellow")) (Atomic "Red")))
False
Quite fun!
StateT and Writer monad gadgets from the standard library, and a cool "novel representation of lists" due to R Hughes. On the fractal side, I’ll try to convince you that fractals are not just cute pictures, but extremely important illustrations that the real numbers are weird. As usual, you can save this post to Fractals.lhs and compile it with ghc --make Fractals.
It seems that a couple of people have gone before me making actually useful fractal packages (the packages are more specifically for "Iterated Function Systems" and "L-systems", respectively) or prettier pictures in their blog posts.
But I didn’t let that stop me from doing a little hacking. This article is hopefully more entertaining to read than a library API. So let’s get started!
> import Data.IORef
> import Data.List
> import Graphics.Rendering.OpenGL as GL
> import Graphics.UI.Gtk as Gtk
> import Graphics.UI.Gtk.OpenGL
> import Control.Monad.Writer
> import Control.Monad.State as S
The state of the cursor is a position (pos), a direction (dir), and whether the ink is activated (ink), so I can move the cursor about without drawing lines everywhere. The neutral state is at the origin, pointed due east, with the ink on.
> type Point2 = Vertex2 GLfloat
>
> data CursorState = TS { pos :: Point2
> , dir :: Float
> , ink :: Bool }
>
> neutral = TS { pos = Vertex2 0 0
> , dir = 0
> , ink = True }
Now rather than define the syntax of cursor commands and make functions for creating and consuming it, I want to embed the commands into Haskell. This is quite easy, of course.
> type Instruction = CursorState -> CursorState
>
> leftI angle s = s { dir = dir s + angle }
> rightI angle s = s { dir = dir s - angle }
> penI b s = s { ink = b }
> forwardI dist s = s { pos = move (pos s) dist (dir s) }
> where move (Vertex2 x y) dist dir = Vertex2 (x + dist * cos dir)
> (y + dist * sin dir)
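A quick standalone check of the cursor arithmetic, substituting a plain pair of Doubles for OpenGL's Vertex2 but otherwise mirroring the definitions above:

```haskell
-- Cursor state over plain pairs, mirroring the post's definitions
data CursorState = TS { pos :: (Double, Double)
                      , dir :: Double
                      , ink :: Bool }

neutral :: CursorState
neutral = TS { pos = (0, 0), dir = 0, ink = True }

leftI :: Double -> CursorState -> CursorState
leftI angle s = s { dir = dir s + angle }

forwardI :: Double -> CursorState -> CursorState
forwardI dist s = s { pos = move (pos s) dist (dir s) }
  where move (x, y) d a = (x + d * cos a, y + d * sin a)
```

Moving forward from neutral heads due east; after a quarter turn left, due north (up to floating-point fuzz).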
A first approach to the semantics is that a sequence of commands should have the state threaded through it. The type of a program would be State CursorState () (the unit is because there is no return value). I would then get the final state of a program, starting from the neutral state, with execState program neutral.
But I don’t actually care about the final state; I want to evaluate this program only for its side effects: whenever I run a forward command, it should leave a line segment if ink was enabled. This situation is just what the Writer monad is for. When I move from s1 to s2, I call tell [(s1,s2)] in order to "log" this line segment.
I actually need to carry the state and the log around, so how do I combine these monads? Well, there’s a huge trail of literature to follow on that! If you are interested, Composing Monads Using Coproducts by C Lüth, N Ghani has a canonical way, and lots of references. But for today, the officially sanctioned approach is to use a monad transformer; in many practical cases this coincides with the coproduct.
So a first attempt at the type of a cursor program would be:
type CursorProgram = StateT CursorState (Writer [(Point2,Point2)]) ()
What is all this? Well, CursorState is the state I want to pass around, and Writer [(Point2, Point2)] is the internal monad. The type is large, but it says a lot: I have a state that is getting passed along, and a log that is being kept. The only thing I have to watch for is to use lift . tell instead of tell, because I need to apply it to the inner Writer.
But you shouldn’t use list append for a log in real life. In the above hypothetical definition, the log is a list, so every time computations are combined with >>=
the writer monad will invoke a potentially-costly list append operation. The log will always grow from its tail, so I can build the list backwards and it would be efficient, but there is a cooler trick (actually already available as the dlist library, named for "difference lists"), from this paper:
In a nutshell: lists and partial applications of (++)
are in bijection, so I can swap them. Here’s the definition and bijection.
> newtype List a = L ([a] -> [a])
> instance Monoid (List a) where
> mappend (L x) (L y) = L (x . y)
> mempty = L id
> inject :: [a] -> List a
> inject l = L (l++)
> recover :: List a -> [a]
> recover (L list) = list []
> singleton :: a -> List a
> singleton x = L (x:)
Notice how if I append a bunch of singletons, it is the same number of applications of :
as if I had built the list backwards. Then when I recover
the list it costs O(n), the same as efficient reversal, so the two are equally good strategies in this case. It would be best to make a newtype
for backwards lists with its own monoid instance anyhow, so the programming overhead is also the same.
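Here is the representation exercised on its own. One caveat from the future: current GHC insists on a Semigroup instance alongside Monoid, a split that postdates this post, so the standalone copy below puts the composition in Semigroup:

```haskell
-- Standalone copy of the difference-list type
newtype List a = L ([a] -> [a])

instance Semigroup (List a) where
  L x <> L y = L (x . y)          -- append is just composition: O(1)

instance Monoid (List a) where
  mempty = L id

inject :: [a] -> List a
inject l = L (l ++)

recover :: List a -> [a]
recover (L f) = f []              -- pay O(n) once, at the very end

singleton :: a -> List a
singleton x = L (x :)
```

Appending singletons one at a time stays constant-cost, and recovering the whole list at the end is a single linear pass.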
Now I just wrap all the instructions to operate on this more-complicated state, adding logging to forward
.
> type CursorProgram = StateT CursorState (Writer (List (Point2,Point2))) ()
>
> liftI :: (CursorState -> CursorState) -> CursorProgram
> liftI instr = S.put . instr =<< S.get
> forward, left, right :: Float -> CursorProgram
> left angle = liftI (leftI angle)
> right angle = liftI (rightI angle)
> forward dist = do s <- S.get
> liftI (forwardI dist)
> s' <- S.get
> when (ink s) $ lift $ tell $ singleton (pos s, pos s')
> pen :: Bool -> CursorProgram
> pen b = liftI (penI b)
> run :: CursorProgram -> CursorState -> [(Point2,Point2)]
> run prog state = recover $ execWriter $ execStateT prog state
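To convince ourselves the plumbing works, here is a scaled-down standalone version — plain pairs for points and a plain list log instead of the List newtype, but the same StateT-over-Writer shape (assuming the mtl package, which the post already uses):

```haskell
import Control.Monad.State
import Control.Monad.Writer

type Point = (Double, Double)

data CursorState = TS { pos :: Point, dir :: Double, ink :: Bool }

neutral :: CursorState
neutral = TS { pos = (0, 0), dir = 0, ink = True }

-- State threaded on the outside, segment log kept by the inner Writer
type CursorProgram = StateT CursorState (Writer [(Point, Point)]) ()

left, forward :: Double -> CursorProgram
left angle = modify (\s -> s { dir = dir s + angle })
forward dist = do
  s <- get
  let (x, y) = pos s
      p' = (x + dist * cos (dir s), y + dist * sin (dir s))
  put s { pos = p' }
  when (ink s) $ lift $ tell [(pos s, p')]   -- log the segment if inked

run :: CursorProgram -> CursorState -> [(Point, Point)]
run prog state = execWriter (execStateT prog state)
```

Running forward 1 >> left (pi/2) >> forward 1 from neutral logs exactly two segments: east along the x-axis, then north.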
That’s not so bad, is it? So now let’s get on to some drawing.
Possibly the simplest fractal (I’m not actually sure if everyone agrees about this falling under the definition of "fractal") that already can blow your mind is the Cantor Set.
> cantor :: Int -> CursorProgram
> cantor depth = cantor' depth 1.0
> where cantor' 0 size = forward size
> cantor' n size = do cantor' (n-1) (size/3)
> pen False
> forward (size/3)
> pen True
> cantor' (n-1) (size/3)
Viewing that won’t be very interesting; it is just an excuse to talk about it. But Wikipedia has a nice image:
Take the segment [0, 1] and remove the center third of it, keeping the endpoints intact. Now remove the center third of each of those segments, and again, and again. Taking the intersection of all of these sets (i.e. the limit) gives the Cantor Set.
So what does it look like? Well, it isn’t empty, since every point that is ever an endpoint sticks around forever. But those aren’t the only points: convince yourself that some point which is never an endpoint is in the set. I’m pretty sure this can be phrased as a coinductive proof ([link to metric coinduction]).
The classic way of understanding the Cantor Set is to use ternary digits. See if you can convince yourself that the cantor set contains every real number that doesn’t require a 1 in its ternary expansion (hint: 0.0222222… = 0.1 so 0.1 doesn’t require a 1 in its ternary expansion)
So any number made of a possibly-infinite string of 0s and 2s is in there. Sound familiar? Well, if we use "1" instead of "2" then we are talking about all possibly-infinite binary strings, which a programmer should intuitively see is all real numbers in [0, 1]. So the Cantor Set is, in fact, uncountable!
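For the digit-minded, here is a little standalone helper (my own, not from the post) that reads a finite string of "binary" digits as the ternary digits 0 and 2, producing a point of the Cantor set:

```haskell
-- Interpret [b1, b2, ...] as the ternary fraction 0.d1 d2 ... where
-- di = 2 if bi is True, else 0.  Every such value lies in the Cantor set.
cantorPoint :: [Bool] -> Double
cantorPoint = foldr step 0
  where step b rest = ((if b then 2 else 0) + rest) / 3
```

For instance, [True] reads as 0.2 in ternary, i.e. 2/3, and [True, False, True] as 0.202, i.e. 20/27 — both survive every deletion round.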
Next, how about the Koch curve?
> koch :: Int -> CursorProgram
> koch depth = koch' depth 1.0
> where koch' 0 size = forward size
> koch' n size = do koch' (n-1) (size/3)
> left (pi/3)
> koch' (n-1) (size/3)
> right (pi * 2/3)
> koch' (n-1) (size/3)
> left (pi/3)
> koch' (n-1) (size/3)
This curve (in the limit) is continuous everywhere but differentiable nowhere.
But what is more fun is when you stick three of them end-to-end, for Koch’s Snowflake.
> kochFlake :: Int -> CursorProgram
> kochFlake depth = do -- lining up
> pen False
> forward 1.0
> right (pi/2)
> forward (1 / (2*sqrt(3)))
> right (pi/2)
> pen True
> -- draw in a triangle shape
> koch depth
> right (2*pi/3)
> koch depth
> right (2*pi/3)
> koch depth
Here, the area is obviously finite, and yet the boundary is infinite.
Finally, there’s Heighway’s dragon.
> heighway :: Int -> CursorProgram
> heighway depth = heighway' depth 1.0 1.0
> where heighway' 0 size parity = forward size
> heighway' n size parity = do left (parity * pi / 4)
> heighway' (n-1) (size / sqrt 2) 1
> right (parity * pi / 2)
> heighway' (n-1) (size / sqrt 2) (-1)
> left (parity * pi / 4)
This is what you get if you just keep folding a piece of paper in half in the same direction, then unfold it and set every fold to a right angle. Rather than recite facts from Wikipedia, I’ll highly recommend following the link as it is an article of rare quality. In fact, all of the articles about these curves were so unexpectedly satisfying that I ended up not feeling the need to write much.
What all of the above fractal curves except the Koch Snowflake have in common is self-similarity. The Cantor set is essentially identical to each of its left and right hand sides, i.e. it is identical to the union of two scaled-down copies of itself, as the cursor-graphics code makes obvious. I said I wouldn’t talk about it, so I’ll just mention that if you write this as C = f1(C) ∪ f2(C), then f1 and f2 are the functions in "iterated function systems". I highly recommend a googling, or better yet, the book which prompted this post.
Below is the actual nitty-gritty OpenGL, Gtk, and IO code that plugs it together.
> render :: CursorProgram -> IO ()
> render fractal = do
> clear [ColorBuffer]
> loadIdentity
> color3f (Color3 1 1 1)
> translate3f $ Vector3 (-0.5) 0 0
> renderPrimitive Lines $ mapM_ vertex $ combine $ run fractal neutral
> where color3f = color :: Color3 GLfloat -> IO ()
> translate3f = translate :: Vector3 GLfloat -> IO ()
> scale3f = scale :: GLfloat -> GLfloat -> GLfloat -> IO ()
> combine = concatMap (\(start,end) -> [start,end])
> draw :: GLDrawingArea -> IO () -> IO ()
> draw canvas render = do
> -- This is all Gtk code, managing the internal structures
> glContext <- glDrawingAreaGetGLContext canvas
> glWin <- glDrawingAreaGetGLWindow canvas
> (w,h) <- glDrawableGetSize glWin
>
> -- These are OpenGL calls to scale up and use the whole canvas
> (pos, _) <- GL.get viewport
>
> -- This is again Gtk code
> glDrawableGLBegin glWin glContext
> viewport $= (pos, Size (fromIntegral w) (fromIntegral h))
> render
> GL.flush -- except this
> glDrawableSwapBuffers glWin
> glDrawableGLEnd glWin
> main = do
> initGUI
> glConfig <- glConfigNew [GLModeRGBA, GLModeDouble]
> initGL
>
> depthRef <- newIORef 0
> fractalRef <- newIORef cantor
>
> canvas <- glDrawingAreaNew glConfig
> let redraw = do depth <- readIORef depthRef
> fractal <- readIORef fractalRef
> draw canvas (render (fractal depth))
>
> onExpose canvas (\_ -> do { redraw; return True } )
>
> buttonBox <- vBoxNew False 0
>
> incrButton <- buttonNew
> Gtk.set incrButton [ buttonLabel := "More iterations." ]
> onClicked incrButton (do oldval <- readIORef depthRef
> putStrLn $ show (oldval + 1) ++ " iterations!"
> writeIORef depthRef (oldval + 1)
> redraw)
>
> decrButton <- buttonNew
> Gtk.set decrButton [ buttonLabel := "Less iterations." ]
> onClicked decrButton (do oldval <- readIORef depthRef
> putStrLn $ show (max 0 (oldval - 1)) ++ " iterations!"
> writeIORef depthRef (max 0 (oldval - 1))
> redraw)
>
> boxPackStart buttonBox incrButton PackNatural 0
> boxPackStart buttonBox decrButton PackNatural 0
>
> dummy <- radioButtonNew -- All buttons join this group
> mapM_ (\(fun,label) -> do button <- radioButtonNewWithLabelFromWidget dummy label
> onToggled button (do me <- toggleButtonGetActive button
> when me $ do writeIORef fractalRef fun
> redraw)
> boxPackStart buttonBox button PackNatural 0)
> [ (cantor, "Cantor Set")
> , (koch, "Koch Curve")
> , (kochFlake, "Koch Flake")
> , (heighway, "Heighway Dragon") ]
>
> canvasBox <- hBoxNew False 0
> boxPackStart canvasBox buttonBox PackNatural 0
> boxPackStart canvasBox canvas PackGrow 0
>
> window <- windowNew
> Gtk.set window [ containerBorderWidth := 10,
> containerChild := canvasBox ]
> onDestroy window mainQuit
>
> widgetShowAll window
> mainGUI
This post is literate Haskell that will output that image, so save it to something like Congestion.lhs and run ghc --make Congestion.lhs. I started with the code from an old post and cut out the bits I didn’t need. The libraries used can be found here:
First, there is the usual administrivia.
> import Data.IORef
> import Data.List
> import Graphics.Rendering.OpenGL as GL hiding (map2)
> import Graphics.UI.Gtk as Gtk hiding (drawSegments, Color)
> import Graphics.UI.Gtk.OpenGL
> import System.Random
Any proper Haskell programmer will of course want to create an infinite list of random points:
> type Point3 = Vertex3 GLfloat
>
> randomPoints :: IO [Point3]
> randomPoints = do
> xgen <- newStdGen
> ygen <- newStdGen
> let xs = randomRs (-0.9,0.9) xgen
> let ys = randomRs (-0.9,0.9) ygen
> return $ map2 (\x y -> Vertex3 x y 0) xs ys
> where map2 f first second = map (uncurry f) (zip first second)
Then renderSegments is straightforward. I draw them in very faint gray so that only in locations of congestion do we see brighter white.
> renderSegments :: [(Point3,Point3)] -> IO ()
> renderSegments segments = do
> clear [ColorBuffer]
> color3f (Color3 0.05 0.05 0.05)
> mapM_ renderSegment segments
> where
> color3f = color :: Color3 GLfloat -> IO ()
> renderSegment (start, end) = renderPrimitive LineStrip $ do vertex start
> vertex end
There are a bunch of Gtk and OpenGL calls to add, yielding drawSegments below. The only thing to note is the setting of blendFunc and blendEquation, which tell OpenGL to add the color that is already on a pixel to the color I’m trying to draw. It is really a cheap trick to get OpenGL to intersect my line segments and add up the totals for me. One gotcha that held me up is that these settings have to be within the glDrawableGLBegin and glDrawableGLEnd calls.
> drawSegments :: GLDrawingArea -> [(Point3,Point3)] -> IO Bool
> drawSegments canvas segments = do
>
> -- This is all Gtk code, managing the internal structures
> glContext <- glDrawingAreaGetGLContext canvas
> glWin <- glDrawingAreaGetGLWindow canvas
> (w,h) <- glDrawableGetSize glWin
>
> glDrawableGLBegin glWin glContext
> (pos, _) <- GL.get viewport
> viewport $= (pos, Size (fromIntegral w) (fromIntegral h))
> blend $= Enabled
> blendFunc $= (One, One)
> blendEquation $= FuncAdd
> renderSegments segments
> GL.flush -- except this
> glDrawableSwapBuffers glWin
> glDrawableGLEnd glWin
> return True
And finally we just plug it all together with even more initialization code.
> main = do
> initGUI
> glConfig <- glConfigNew [GLModeRGBA, GLModeDouble]
> initGL
> startPoints <- randomPoints
> endPoints <- randomPoints
> let segments = take 10000 $ zip startPoints endPoints
>
> canvas <- glDrawingAreaNew glConfig
> onExpose canvas (\_ -> drawSegments canvas segments)
>
> window <- windowNew
> Gtk.set window [ containerBorderWidth := 10,
> containerChild := canvas ]
> onDestroy window mainQuit
>
> widgetShowAll window
> mainGUI
As usual, you can save this post to KnuthBendix.lhs and compile it with ghc --make KnuthBendix, or load it up with ghci KnuthBendix.lhs.
> module Main where
> import Control.Monad
> import Data.List
> import Test.QuickCheck hiding (trivial)
To give a little background, something I realize I’ve neglected in the past, Knuth-Bendix completion is a technique in universal algebra, which is essentially the study of unityped syntax trees for operator/variable/constant expression languages, like these:
> data Term op a = Operator op [Term op a]
> | Variable String
> | Constant a
Your usual algebraic structures are for the most part special cases in universal algebra – anything that has an ambient set with some bunch of operators and equational axioms qualifies, and universal algebra supplies the variables to represent unspecified quantities.
For example, a monoid is a set S with an operator * and a special constant e obeying these axioms, where x, y, z are variables that can be replaced by any term.
Aside: Consider the obvious way to axiomatize a group in this framework. I think it is a nice example of the interaction of constructive logic and computation.
But anyhow, today I’m not going to use this structure, because I can explain and explore Knuth-Bendix more quickly by sticking to monoids. The full completion procedure, and its modern enhancements, works on terms with variables, using unification where I use equality and superposition where I use string matching. In the case of a monoid, the associative law lets me simplify the term structure from a tree to just a list, and since I’m not including variables, I deal just with words over my alphabet a:
> type Word a = [a]
A presentation is just a formalism as above, specifying the ambient set X (here, the type parameter a) and some equalities R called relations, written in mathematical notation as
⟨X ∣ R⟩
and in Haskell:
> type Relation a = (Word a, Word a)
> type Presentation a = [Relation a]
For an easy example of a monoid and its presentation, Bool forms a monoid using the && operator, which has identity True. Here is a presentation for the monoid in each notation (in general, presentations are not unique, and there’s a whole theory of messing about with them, which is exactly what we are about to do!)
⟨True, False ∣ True ∧ True = True, True ∧ False = False, False ∧ True = False, False ∧ False = False⟩
> boolAnd = [ ([True,True], [True])
> , ([True, False], [False])
> , ([False, True], [False])
> , ([False, False], [False]) ]
In this case, the equations are just the definition for &&. Another monoid you’ve certainly seen as a programmer is the free monoid over X, which looks like this:
⟨X ∣ ⟩
> freeMonoid = []
In other words, it is just lists of elements of X since there are no rules for manipulating the words. The List monad is intimately related to this monoid.
Another good example is the following – see if you can figure out what it represents before going on.
⟨x ∣ x^{n} = e⟩
Yes, it is a presentation for the monoid (in fact, group) Z_{n}, the integers mod n. You are intended to interpret the group operation as addition modulo n, x as 1, and e as the identity 0, hence x^{n} is really n. Of course, the abstractness of the presentation meshes well with this group’s other name, the “cyclic group of order n“.
So, formalism is great fun and all, but sane folk prefer for computers to deal with it if possible. Our goal is to decide whether two words are equal according to the identities in the presentation. In general, this is undecidable, see
Here are some more example presentations for groups in Haskell, so I can use them. They are mostly harvested from http://en.wikipedia.org/wiki/Group_presentation and http://en.wikipedia.org/wiki/Knuth-Bendix_completion_algorithm
The cyclic group of order n, as above:
> cyclic n = [ (take n (repeat 'x'),"") ]
The dihedral group of order 2n:
> dihedral n = [ (take n (repeat 'r'),"")
> , ("ff", "")
> , ("rfrf", "") ]
The basic thing we are going to do is interpret equalities as rewriting rules. Everywhere the left-hand side appears, we insert the right-hand side. This function rewrites just once, if it finds an opportunity. I’m not going to even try for efficiency, since I get such a kick out of writing these pithy little Haskell functions.
> rewrite :: Eq a => Relation a -> Word a -> Word a
> rewrite _ [] = []
> rewrite rel@(lhs,rhs) word@(x:rest) = maybe (x:rewrite rel rest) ((rhs ++) . snd)
> maybeRewritten
> where maybeRewritten = find ((lhs ==) . fst) (zip (inits word) (tails word))
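A quick check of rewrite as a standalone snippet (with Word renamed, since today's Prelude exports a Word type of its own):

```haskell
import Data.List (find, inits, tails)

type Word' a = [a]
type Relation a = (Word' a, Word' a)

-- rewrite, copied from above: replace the leftmost occurrence of lhs
rewrite :: Eq a => Relation a -> Word' a -> Word' a
rewrite _ [] = []
rewrite rel@(lhs, rhs) word@(x:rest) =
    maybe (x : rewrite rel rest) ((rhs ++) . snd) maybeRewritten
  where maybeRewritten = find ((lhs ==) . fst) (zip (inits word) (tails word))
```

So rewrite ("xy","z") "axyb" gives "azb", and only the first occurrence fires: "xyxy" becomes "zxy", not "zz".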
A natural way to check if two words are equal is to reduce them until they can reduce no further, and see if these normal forms are equal.
> reduce :: Eq a => Presentation a -> Word a -> Word a
> reduce pres word = if word' == word then word else reduce pres word'
> where word' = foldr rewrite word pres
This function fully reduces a word, assuming the presentation has “good” properties like always making words smaller according to a well-founded ordering. To make sure of this, we can orient any relation according to such an ordering.
> shortlex :: Ord a => Word a -> Word a -> Ordering
> shortlex l1 l2 = if length l1 < length l2 then LT
> else if length l1 > length l2 then GT
> else lexical l1 l2
> where lexical [] [] = EQ
> lexical (x:xs) (y:ys) = case compare x y of
> EQ -> lexical xs ys
> other -> other
>
> orient :: Ord a => Relation a -> Relation a
> orient (lhs,rhs) = case shortlex lhs rhs of
> LT -> (rhs, lhs)
> _ -> (lhs, rhs)
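Exercising the ordering standalone (using the derived Ord instance for lists, which agrees with lexical on equal-length words; Word renamed as before):

```haskell
type Word' a = [a]
type Relation a = (Word' a, Word' a)

-- Shorter words come first; ties broken lexically
shortlex :: Ord a => Word' a -> Word' a -> Ordering
shortlex l1 l2
  | length l1 < length l2 = LT
  | length l1 > length l2 = GT
  | otherwise             = compare l1 l2   -- plain lexical comparison

-- Point every relation from the bigger word to the smaller one
orient :: Ord a => Relation a -> Relation a
orient (lhs, rhs) = case shortlex lhs rhs of
                      LT -> (rhs, lhs)
                      _  -> (lhs, rhs)
```

Orienting every rule downhill in this well-founded order is what guarantees that reduce terminates.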
But the results may be provably equal even though their normal forms are not, for example if I have this monoid
⟨x, y, z ∣ xy = yx = z⟩
> xyzExample = [ ("yx", "z")
> , ("xy", "z") ]
I know that "xz" == "zx", but I cannot obviously prove it:
*Main> reduce xyzExample "xz"
"xz"
*Main> reduce xyzExample "zx"
"zx"
because the proof goes through xz = xyx = zx. Knuth calls these “critical pairs” and they, in some sense, represent the point in a proof where someone had to be more clever than just cranking on the rules. But just to preview how completion works,
*Main> reduce (complete xyzExample) "xz"
"xz"
*Main> reduce (complete xyzExample) "zx"
"xz"
Voila!
We’ll also want to reduce relations and presentations. For relations, this is just reducing both sides, and orienting for good measure.
> reduceRel :: Ord a => Presentation a -> Relation a -> Relation a
> reduceRel pres (lhs,rhs) = orient (reduce pres lhs, reduce pres rhs)
To reduce a presentation, we reduce each relation with all the other relations. If the two sides reduce to the same word, then the relation was redundant and we can delete it.
> trivial :: Eq a => Relation a -> Bool
> trivial (x,y) = x == y -- Otherwise known as `uncurry (==)`
>
> redundant :: Ord a => Presentation a -> Relation a -> Bool
> redundant pres = trivial . reduceRel pres
>
> reducePres :: Ord a => Presentation a -> Presentation a
> reducePres pres = filter (not . trivial) (reduceP [] pres)
> where reduceP prior [] = prior
> reduceP prior (r:future) = let r' = reduceRel (prior++future) r
> in reduceP (r':prior) future
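To see redundancy elimination fire, here is a standalone run (definitions copied from above, Word renamed to dodge the Prelude):

```haskell
import Data.List (find, inits, tails)

type Word' a = [a]
type Relation a = (Word' a, Word' a)
type Presentation a = [Relation a]

rewrite :: Eq a => Relation a -> Word' a -> Word' a
rewrite _ [] = []
rewrite rel@(lhs, rhs) word@(x:rest) =
    maybe (x : rewrite rel rest) ((rhs ++) . snd) maybeRewritten
  where maybeRewritten = find ((lhs ==) . fst) (zip (inits word) (tails word))

reduce :: Eq a => Presentation a -> Word' a -> Word' a
reduce pres word = if word' == word then word else reduce pres word'
  where word' = foldr rewrite word pres

shortlex :: Ord a => Word' a -> Word' a -> Ordering
shortlex l1 l2
  | length l1 < length l2 = LT
  | length l1 > length l2 = GT
  | otherwise             = compare l1 l2

orient :: Ord a => Relation a -> Relation a
orient (lhs, rhs) = case shortlex lhs rhs of
                      LT -> (rhs, lhs)
                      _  -> (lhs, rhs)

reduceRel :: Ord a => Presentation a -> Relation a -> Relation a
reduceRel pres (lhs, rhs) = orient (reduce pres lhs, reduce pres rhs)

trivial :: Eq a => Relation a -> Bool
trivial (x, y) = x == y

-- Reduce each relation against all the others, dropping trivial ones
reducePres :: Ord a => Presentation a -> Presentation a
reducePres pres = filter (not . trivial) (reduceP [] pres)
  where reduceP prior []         = prior
        reduceP prior (r:future) = let r' = reduceRel (prior ++ future) r
                                   in reduceP (r':prior) future
```

A duplicated relation reduces to a trivial one and disappears: reducePres [("xx","x"),("xx","x")] comes back as just [("xx","x")].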
Now on to the Knuth-Bendix procedure. The primary idea is to find all the possible critical pairs as described above, and to make new rewrite rules so they aren’t critical anymore. In other terminology, we look for exceptions to local confluence, and patch them up.
First, partitions is a list of ways to split a word into two nonempty parts.
> partitions :: Word a -> [(Word a, Word a)]
> partitions x = reverse . tail . reverse . tail $ zip (inits x) (tails x)
Results look like this:
*Main> partitions "abcde"
[("a","bcde"),("ab","cde"),("abc","de"),("abcd","e")]
Next, superpositions takes two words, and returns all the ways that the back of the first word could be the front of the second word.
> superpositions :: Eq a => Word a -> Word a -> [(Word a, Word a, Word a)]
> superpositions x y = map merge $ filter critical $ allPairs
> where critical ((a,b),(c,d)) = (b == c)
> merge ((a,b),(c,d)) = (a,b,d)
> allPairs = [(p1, p2) | p1 <- partitions x, p2 <- partitions y]
*Main> superpositions "abb" "bbc"
[("a","bb","c"),("ab","b","bc")]
Then criticalPairs takes all the superpositions (x,y,z) where xy is reducible by one relation and yz is reducible by the second, and returns the result of the aforementioned reductions. The last function, allCriticalPairs, just filters these for inequivalent pairs.
> criticalPairs :: Eq a => Relation a -> Relation a -> [(Word a, Word a)]
> criticalPairs (l1,r1) (l2,r2) = map reduceSides (superpositions l1 l2)
> where reduceSides (x,y,z) = (r1 ++ z, x ++ r2)
>
> allCriticalPairs :: Ord a => Presentation a -> [(Word a, Word a)]
> allCriticalPairs pres = filter (not . redundant pres)
> $ concatMap (uncurry criticalPairs) rels
> where rels = [(r1,r2) | r1 <- pres, r2 <- pres]
To save some redundant work, we’ll assume the input presentation is reduced and oriented, and maintain that invariant ourselves with the help of reducePres and orient. (Note how we run into the annoying aspect of Haskell where instances are global – so you can’t override the ordering on lists; there are good theoretical reasons for this annoyance but it still sucks!)
Then completion is pretty simple – just add the first non-reducible critical pair until there are no more.
> complete :: Ord a => Presentation a -> Presentation a
> complete pres = augment critPairs
> where augment [] = pres
> augment (x:_) = complete $ reducePres (x : pres)
> critPairs = map orient (allCriticalPairs pres)
But this version of completion simplifies the presentation at every step, as per the descriptions of the algorithm I’ve seen – I obviously can’t do that if the result is infinite. The best I can think of is to track the finite number of relations I’ve already processed, and reduce each relation as I consider it according to those. And since I bet the order that generated rewrite rules are visited matters, I use the interleave function to make sure that all rewrite rules are eventually hit.
> quasicomplete :: Ord a => Presentation a -> Presentation a
> quasicomplete pres = augment [] pres
> where augment prior [] = []
> augment prior (x:xs) | redundant prior x = augment prior xs
> | otherwise = x':augment prior' rest
> where x' = reduceRel prior x
> prior' = reducePres (x:prior)
> rest = map (reduceRel prior') (interleave critPairs xs)
> critPairs = allCriticalPairs prior'
> interleave :: [a] -> [a] -> [a]
> interleave [] ys = ys
> interleave xs [] = xs
> interleave (x:xs) ys = x:(interleave ys xs)
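Since the fairness of interleave is what guarantees every generated rewrite rule is eventually visited, here is a quick standalone check (repeating the definition so the sketch runs on its own) that both lists keep making progress, even when the first one is infinite:

```haskell
-- Same definition as in the post, repeated so this sketch is self-contained.
interleave :: [a] -> [a] -> [a]
interleave [] ys = ys
interleave xs [] = xs
interleave (x:xs) ys = x : interleave ys xs

main :: IO ()
main = do
  print (interleave [1,2,3] [4,5])               -- [1,4,2,5,3]
  print (take 6 (interleave (repeat 0) [1,2,3])) -- [0,1,0,2,0,3]
```

Note how swapping the argument lists on each step means the elements of the second list still appear at positions 1, 3, 5, … of the result no matter how long the first list is.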
And I need to directly test equality rather than reducing a relation and checking for triviality, since I don’t know when reduction is done.
> equiv :: Ord a => Presentation a -> Word a -> Word a -> Bool
> equiv pres w1 w2 = or (map equivPrefix (inits pres))
> where equivPrefix prefix = reduce prefix w1 == reduce prefix w2
The above function does not give the same completions as the finite version, but it does seem to work. Here is a QuickCheck property to test it. Since interesting presentations are probably hard to autogenerate, I just specialize it to the xyzExample.
> prop_quasiXYZ :: [XYZ] -> [XYZ] -> Bool
> prop_quasiXYZ xyz1 xyz2 =
> trivial (reduceRel (complete xyzExample) (w1,w2)) ==
> trivial (reduceRel (quasicomplete xyzExample) (w1,w2))
> where w1 = map unXYZ xyz1
> w2 = map unXYZ xyz2
>
> newtype XYZ = XYZ { unXYZ :: Char }
> instance Arbitrary XYZ where
> arbitrary = liftM XYZ $ oneof [return 'x', return 'y', return 'z']
> instance Show XYZ where
> show = show . unXYZ
All seems well! Now here’s the example Wikipedia uses for Knuth-Bendix:
> wikipediaExample = [ ("xxx", "")
> , ("yyy", "")
> , ("xyxyxy", "") ]
Indeed, the finite completion algorithm agrees with what Wikipedia says, and the following property passes lots of tests:
> prop_quasiWiki :: [XY] -> [XY] -> Bool
> prop_quasiWiki xy1 xy2 =
> trivial (reduceRel (complete wikipediaExample) (w1,w2)) ==
> trivial (reduceRel (quasicomplete wikipediaExample) (w1,w2))
> where w1 = map unXY xy1
> w2 = map unXY xy2
>
> newtype XY = XY { unXY :: Char }
> instance Arbitrary XY where
> arbitrary = liftM XY $ oneof [return 'x', return 'y']
> instance Show XY where
> show = show . unXY
And the major test – I accidentally found out that my procedures don’t terminate for the dihedral groups greater than 3, so let’s make some tests of hand-checked equalities.
> d3 = quasicomplete (dihedral 3)
> prop_d3 :: Bool
> prop_d3 = all (uncurry $ equiv d3) [ ("rrf", "fr")
> , ("frf", "rr")
> , ("frr", "rf")
> , ("f", "rfr")
> , ("rfrff", "rrfrr") ]
If I were really fancy I’d take random walks on the rewrite rules and then see if completion could retrace those steps. But I’m not that fancy today!
Here’s my little pseudo-QuickCheck main:
> main = do mycheck ("prop_quasiXYZ", prop_quasiXYZ)
> mycheck ("prop_quasiWiki", prop_quasiWiki)
> mycheck ("prop_d3", prop_d3)
> where mycheck (name,prop) = do
> putStr (name ++ ": ")
> quickCheck prop
This post is a valid literate Haskell file, so save it to something like Isometries.lhs and run ghc --make Isometries. Then check it with quickCheck +names Isometries.lhs.
Two aspects of this post are given about equal weight:
The mathematical content is elementary and can be understood by anyone familiar with basic trigonometry, as you might learn in high school. It is inspired by the book Symmetries by D. L. Johnson, one of the titles in the very excellent Springer Undergraduate Mathematics Series.
The tool QuickCheck is a fairly brilliant and easy-to-use automatic testing library for Haskell. I use it to verify each step of the post. All but the first of my QuickCheck properties found real errors!
> module Main where
> import Test.QuickCheck
Now, the reflect-rotate-translate normal form is defined relative to a point P (the center of rotation) and a line L (of reflection). Concisely: f = t s r where t is a translation, s is a rotation about P, and r is a reflection about L (allowing the identity to be considered a reflection).
I will choose P = (0,0) and L = the X axis since they are simple.
> newtype Translation = Translate (Double, Double) deriving Show
> newtype Rotation = Rotate Double deriving Show
> newtype Reflection = Reflect Bool deriving Show
>
> translate (Translate (dx,dy)) (x,y) = (x+dx,y+dy)
> rotate (Rotate angle) (x,y) = (x * cos angle - y * sin angle,
> x * sin angle + y * cos angle)
> reflect (Reflect b) (x,y) = if b then (x,-y) else (x,y)
>
> type IsometryNF = (Translation, Rotation, Reflection)
> apply (t, s, r) (x,y) = (translate t . rotate s . reflect r) (x,y)
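As a standalone sketch of the composition order (repeating simplified copies of the definitions above), note that apply works right-to-left: the point is reflected first, then rotated, then translated.

```haskell
-- Simplified copies of the post's basic isometries, for a self-contained demo.
newtype Translation = Translate (Double, Double)
newtype Rotation = Rotate Double
newtype Reflection = Reflect Bool

translate (Translate (dx,dy)) (x,y) = (x+dx, y+dy)
rotate (Rotate a) (x,y) = (x*cos a - y*sin a, x*sin a + y*cos a)
reflect (Reflect b) (x,y) = if b then (x,-y) else (x,y)

apply :: (Translation, Rotation, Reflection) -> (Double,Double) -> (Double,Double)
apply (t,s,r) = translate t . rotate s . reflect r

main :: IO ()
main = print (apply (Translate (1,0), Rotate (pi/2), Reflect True) (0,1))
-- reflect: (0,-1); rotate 90 degrees: (1,0); translate: approximately (2,0)
```

The result is (2,0) up to floating-point rounding in cos (pi/2).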
Aside from preserving distances, the other key aspect of an isometry is that it is invertible, so let’s express the invertibility of these basic isometries with a Haskell type class.
> class Invertible a where
> inverse :: a -> a
>
> instance Invertible Translation where
> inverse (Translate (dx,dy)) = (Translate (-dx,-dy))
>
> instance Invertible Rotation where
> inverse (Rotate angle) = Rotate (-angle)
>
> instance Invertible Reflection where
> inverse (Reflect b) = Reflect b
We can now express the normalForm function. As input, it takes an arbitrary "black-box" isometry as a Haskell function (the type doesn’t enforce that the function is actually an isometry, of course). As each component of the normal form is computed, the inverse of that component is applied before calculating the next component.
> type Point2D = (Double,Double)
> type Map2D = Point2D -> Point2D
> data Isometry = Isometry Map2D
>
> normalForm :: Isometry -> IsometryNF
> normalForm (Isometry f) = (t, s, r)
> where t = translation f
> s = rotation (translate (inverse t) . f)
> r = reflection (rotate (inverse s) . translate (inverse t) . f)
The rest of this post is writing and specifying the translation, rotation, and reflection helper functions. As an example, I’ve created this isometry using GeoGebra. I will maintain the convention that the source objects are blue and the output of a transformation is red.
Since reflections and rotations fix the origin, the translation is just wherever the origin gets sent.
> translation :: Map2D -> Translation
> translation f = Translate (f (0,0))
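For a concrete feel, here is a standalone sketch with a hypothetical isometry f — a rotation by -90° followed by a translation by (3,1) — and a simplified translation that returns the raw pair instead of the post's Translation newtype:

```haskell
-- Hypothetical example isometry: rotate by -90 degrees ((x,y) -> (y,-x)),
-- then translate by (3,1).
f :: (Double, Double) -> (Double, Double)
f (x,y) = (y + 3, -x + 1)

-- The translation component is wherever the origin lands.
translation :: ((Double,Double) -> (Double,Double)) -> (Double, Double)
translation g = g (0,0)

main :: IO ()
main = print (translation f)   -- (3.0,1.0)
```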
On translations, this should be the identity, and we express that fact with the first of these QuickCheck properties. The second indicates that for an arbitrary isometry f = t s r, composing with the translation’s inverse should fix the origin, because s and r leave the origin where it is: t^{-1} f = t^{-1} t s r = s r. Or in pictures:
The operator =~= is an "approximate" equality operator for floating point numbers.
> prop_translation :: Translation -> Point2D -> Bool
> prop_translation trans (x,y) = translate trans (x,y) =~=
> translate (translation (translate trans)) (x,y)
>
> prop_tInv :: Isometry -> Point2D -> Bool
> prop_tInv (Isometry f) (x,y) = (tInv . f) (0,0) =~= (0,0)
> where tInv = (translate . inverse . translation) f
To find the rotation, we pick any point on the X axis and see where it is sent after inverting the translation. A simple choice is (1,0) which will be rotated somewhere else on the unit circle.
> rotation :: Map2D -> Rotation
> rotation f = Rotate angle
> where (x,y) = f (1,0)
> yAngle = asin y
> xAngle = acos x
> angle = if yAngle > 0 then xAngle else 2*pi - xAngle
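A standalone sketch of the angle recovery, testing the sign of y directly (equivalent to the asin test above), with a hypothetical rot helper standing in for the post's rotate:

```haskell
-- Stand-in for the post's rotate, on plain pairs.
rot :: Double -> (Double,Double) -> (Double,Double)
rot a (x,y) = (x*cos a - y*sin a, x*sin a + y*cos a)

-- Where does (1,0) land on the unit circle, and which branch of acos applies?
recoverAngle :: ((Double,Double) -> (Double,Double)) -> Double
recoverAngle f = if y > 0 then acos x else 2*pi - acos x
  where (x,y) = f (1,0)

main :: IO ()
main = mapM_ (print . recoverAngle . rot) [1.0, 4.0]
-- prints approximately 1.0 and 4.0: both branches are exercised,
-- since 4 radians puts (1,0) below the X axis
```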
To test this function, we use extensional equality on rotation functions rather than intensional equality on the angle since rotations do not have a unique representation (our function returns a canonical representative between 0 and 2π). As inverting the translation component of an isometry fixes the origin, inverting this rotation should fix the point (1,0) and by implication the entire X axis. In pictures:
> prop_rotation :: Rotation -> Point2D -> Bool
> prop_rotation rot (x,y) = rotate rot (x,y) =~=
> rotate (rotation (rotate rot)) (x,y)
> prop_sInv :: Isometry -> Point2D -> Bool
> prop_sInv (Isometry f) (x,y) = (sInv . tInv . f) (1,0) =~= (1,0)
> where tInv = (translate . inverse . translation) f
> sInv = (rotate . inverse . rotation) (tInv . f)
We have calculated t and s and now we have in hand: s^{-1} t ^{-1} f = s^{-1} t ^{-1} t s r = r
Now we just figure out the reflection r by choosing any point not on the X axis and seeing if it was reflected or not. An obvious choice is (0,1).
> reflection :: Map2D -> Reflection
> reflection f = Reflect (not (f (0,1) =~= (0,1)))
The correctness properties should be familiar by now:
> prop_reflection :: Reflection -> Point2D -> Bool
> prop_reflection refl (x,y) = reflect refl (x,y) =~=
> reflect (reflection (reflect refl)) (x,y)
>
> prop_rInv :: Isometry -> Point2D -> Bool
> prop_rInv (Isometry f) (x,y) = (rInv . sInv . tInv . f) (1,0) =~= (1,0)
> where tInv = (translate . inverse . translation) f
> sInv = (rotate . inverse . rotation) (tInv . f)
> rInv = (reflect . inverse . reflection) (sInv . tInv . f)
And we are done! To test, though, we need to tell QuickCheck how to generate isometries. I could reuse the basic isometries, but an independent implementation is desirable for cross-checking, so I’ll use another mathematical property to generate random isometries: they are all the composition of three reflections, which may each be the identity, of course.
Reflecting about an arbitrary line is pretty easy: translate so the line passes through the origin, rotate the line onto the horizontal axis, then reflect (sound familiar?). You can read more at Planet Math if you like, or figure out the formulae yourself with some high school trigonometry, or just let the computer compose the functions for you. Because I want to decouple my specifications and implementation, I worked out the formulae directly.
> instance Arbitrary Isometry where
> arbitrary = do refl1 <- newRefl
> refl2 <- newRefl
> refl3 <- newRefl
> return (Isometry (refl3 . refl2 . refl1))
> where newRefl = do angle <- arbitrary
> yOffset <- arbitrary
> return (reflectAbout yOffset angle)
>
> reflectAbout :: Double -> Double -> Map2D
> reflectAbout yOffset angle =
> translateY yOffset . reflectRotate angle . translateY (-yOffset)
> where translateY dy (x,y) = (x,y+dy)
> reflectRotate angle (x,y) = (x * cos (2*angle) + y * sin (2*angle),
> x * sin (2*angle) - y * cos (2*angle))
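A standalone check (repeating reflectAbout on plain pairs) that a point on the line through (0, yOffset) at the given angle is fixed, as a reflection's axis should be:

```haskell
-- Same construction as in the post: shift the line to the origin,
-- reflect about the line through the origin at `angle`, shift back.
reflectAbout :: Double -> Double -> (Double,Double) -> (Double,Double)
reflectAbout yOffset angle =
    translateY yOffset . reflectRotate angle . translateY (-yOffset)
  where translateY dy (x,y) = (x, y+dy)
        reflectRotate a (x,y) = (x*cos (2*a) + y*sin (2*a),
                                 x*sin (2*a) - y*cos (2*a))

main :: IO ()
main = print (reflectAbout 2 0.5 (cos 0.5, sin 0.5 + 2))
-- (cos 0.5, sin 0.5 + 2) lies on the axis, so it is (approximately) fixed
```

Unit-circle trigonometry confirms the fixed point: reflectRotate a maps (cos a, sin a) to (cos (2a - a), sin (2a - a)) = (cos a, sin a).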
And we can use QuickCheck to test our generator. (This caught a typo in reflectAbout.)
> prop_Isometry :: Isometry -> Point2D -> Point2D -> Bool
> prop_Isometry (Isometry f) p1 p2 = distsq p1 p2 =~= distsq (f p1) (f p2)
> where distsq (x,y) (x',y') = (x-x')**2 + (y-y')**2
Then the statement of correctness for the entire algorithm is:
> prop_NF :: Isometry -> Point2D -> Bool
> prop_NF f'@(Isometry f) (x,y) = f (x,y) =~= apply (normalForm f') (x,y)
And normalForm should also be the identity on normal forms, to check that I’ve written apply correctly. A lot of these properties overlap so they fail together, but it doesn’t hurt to have a lot of properties.
> prop_NFNF nf (x,y) = apply nf (x,y) =~=
> apply (normalForm $ Isometry $ apply nf) (x,y)
The QuickCheck page has a script to run your tests in hugs, but I had to edit it somewhat to run it on my machine. In case you don’t want to do that, this file can just be compiled and run. Either way you run the checks, you should see something like this:
*Main> prop_translation: OK, passed 100 tests.
*Main> prop_tInv: OK, passed 100 tests.
*Main> prop_rotation: OK, passed 100 tests.
*Main> prop_sInv: OK, passed 100 tests.
*Main> prop_reflection: OK, passed 100 tests.
*Main> prop_rInv: OK, passed 100 tests.
*Main> prop_Isometry: OK, passed 100 tests.
*Main> prop_NF: OK, passed 100 tests.
*Main> prop_NFNF: OK, passed 100 tests.
Below here is just boilerplate — end of commentary.
> main = do check ("prop_translation", prop_translation)
> check ("prop_tInv", prop_tInv)
> check ("prop_rotation", prop_rotation)
> check ("prop_sInv", prop_sInv)
> check ("prop_reflection", prop_reflection)
> check ("prop_rInv", prop_rInv)
> check ("prop_Isometry", prop_Isometry)
> check ("prop_NF", prop_NF)
> check ("prop_NFNF", prop_NFNF)
>
> where check (name,prop) = do putStr (name ++ ": ")
> quickCheck prop
>
> instance Arbitrary Translation where
> arbitrary = do dx <- arbitrary
> dy <- arbitrary
> return (Translate (dx,dy))
>
> instance Arbitrary Rotation where
> arbitrary = do angle <- arbitrary
> return (Rotate angle)
>
> instance Arbitrary Reflection where
> arbitrary = do refl <- arbitrary
> return (Reflect refl)
>
> instance ApproxEq Rotation where
> (Rotate a) =~= (Rotate a') = a =~= a'
>
> instance Show Isometry where
> show = show . normalForm -- cheating!
>
> class ApproxEq a where
> (=~=) :: a -> a -> Bool
>
> instance (ApproxEq a, ApproxEq b) => ApproxEq (a,b) where
> (x,y) =~= (x',y') = (x =~= x') && (y =~= y')
>
> instance ApproxEq Double where
> x =~= x' = (abs (x-x') < epsilon)
> where epsilon = 0.001
This post is a valid literate Haskell file, so save it to something like ConvexHull.lhs and compile with ghc --make ConvexHull. What you see above is what you’ll get when you run ./ConvexHull.
The best OpenGL tutorial for Haskell that I’ve found is this one from Michi’s blog, using GLUT to interface with X. For this tutorial we are going to use the Gtk GLDrawingArea widget instead, to illustrate the differences, which can be rather hard to find in the documentation.
The libraries used are gtk2hs and its GtkGLExt OpenGL extension (Graphics.UI.Gtk.OpenGL). These are thin bindings, so our code is all going to be pretty imperative.
> import Data.IORef
> import Data.List
> import Graphics.Rendering.OpenGL as GL
> import Graphics.UI.Gtk as Gtk
> import Graphics.UI.Gtk.OpenGL
> import System.Random
I’ll show main first. If you are just looking for the outline of how to initialize everything and make it go, here it is:
> main = do
>   initGUI
>   glConfig <- glConfigNew [GLModeRGBA, GLModeDouble]
>   initGL
>
>   pointRef <- (randomPoints 100 >>= newIORef)
>
>   canvas <- glDrawingAreaNew glConfig
>   onExpose canvas (\_ -> readIORef pointRef >>= drawWithHull canvas)
>
>   button <- buttonNew
>   Gtk.set button [ buttonLabel := "Generate new points." ]
>   onClicked button (do newPoints <- randomPoints 100
>                        writeIORef pointRef newPoints
>                        drawWithHull canvas newPoints
>                        return ())
>
>   vbox <- vBoxNew False 0
>   boxPackStart vbox button PackNatural 0
>   boxPackStart vbox canvas PackGrow 0
>
>   window <- windowNew
>   Gtk.set window [ containerBorderWidth := 10,
>                    containerChild := vbox ]
>   onDestroy window mainQuit
>
>   widgetShowAll window
>   mainGUI
Now, Haskell’s OpenGL binding has some quirks with regards to numeric overloading, so it helps to define some type aliases. Since I want to take cross products I’ll work in three dimensions, and define some basic operations on my points. The OpenGL binding has separate types for points and vectors, but I’m going to abuse the point type to represent both.
> type Point3 = Vertex3 GLfloat
>
> cross :: Point3 -> Point3 -> Point3
> cross (Vertex3 x0 y0 z0) (Vertex3 x1 y1 z1) =
>     Vertex3 (y0*z1 - z0*y1) (z0*x1 - x0*z1) (x0*y1 - x1*y0)
>
> dot :: Point3 -> Point3 -> GLfloat
> dot (Vertex3 x0 y0 z0) (Vertex3 x1 y1 z1) = x0*x1 + y0*y1 + z0*z1
>
> randomPoints :: Int -> IO [Point3]
> randomPoints 0 = return []
> randomPoints n = do
>     x <- randomRIO (-1, 1)
>     y <- randomRIO (-1, 1)
>     rest <- randomPoints (n-1)
>     return $ Vertex3 x y 0 : rest
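A standalone sanity check of the cross product, on plain tuples rather than Vertex3: x × y = z, and for points in the z = 0 plane only the z component is ever nonzero — which is all the convex hull code will need.

```haskell
-- 3D cross product on tuples, mirroring the Vertex3-based version.
cross :: (Double,Double,Double) -> (Double,Double,Double) -> (Double,Double,Double)
cross (x0,y0,z0) (x1,y1,z1) =
    (y0*z1 - z0*y1, z0*x1 - x0*z1, x0*y1 - y0*x1)

main :: IO ()
main = do
  print (cross (1,0,0) (0,1,0))   -- (0.0,0.0,1.0): x cross y = z
  print (cross (0,1,0) (0,0,1))   -- (1.0,0.0,0.0): y cross z = x
```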
Now for the quirks with using Gtk for OpenGL – there are many more setup calls to make. First, you need to explicitly grab a graphics context (glContext) and GL drawing window (glWin). Then, we manage the viewport manually to scale our rendering up to fill the window. Finally, there are Gtk calls to start and end OpenGL rendering calls.
It took me a while to discover them.
> drawWithHull :: GLDrawingArea -> [Point3] -> IO Bool
> drawWithHull canvas points = do
>
>     -- This is all Gtk code, managing the internal structures
>     glContext <- glDrawingAreaGetGLContext canvas
>     glWin <- glDrawingAreaGetGLWindow canvas
>     (w,h) <- widgetGetSize canvas
>
>     -- This is again Gtk code
>     glDrawableGLBegin glWin glContext
>
>     -- These are OpenGL calls to scale up and use the whole canvas
>     (pos, _) <- GL.get viewport
>     viewport $= (pos, Size (fromIntegral w) (fromIntegral h))
>
>     renderWithHull points
>     GL.flush -- except this
>     glDrawableSwapBuffers glWin
>     glDrawableGLEnd glWin
>     return True
I use the terminology “draw” to refer to Gtk drawing code, which tends to be bookkeeping, while I use “render” to refer to sequences of OpenGL calls. Here is the code to actually render the points and their convex hull. Note the color3f specialization, to help the type inferencer.
> renderWithHull :: [Point3] -> IO ()
> renderWithHull points = do
>     clear [ColorBuffer]
>     color3f (Color3 1 1 1)
>     renderPrimitive Quads $ mapM_ fatPoint $ points
>     color3f (Color3 1 0 0)
>     renderPrimitive LineStrip $ mapM_ vertex $ hull
>   where hull = convexHull points
>         color3f = color :: Color3 GLfloat -> IO ()
>         fatPoint (Vertex3 x y z) = do
>             vertex $ Vertex3 (x+0.005) (y+0.005) z
>             vertex $ Vertex3 (x-0.005) (y+0.005) z
>             vertex $ Vertex3 (x-0.005) (y-0.005) z
>             vertex $ Vertex3 (x+0.005) (y-0.005) z
From here on, I’m just implementing the convex hull algorithm.
This is an iterative algorithm that computes the upper half-hull by travelling left-to-right across the plane making sure to always make right turns; if ever a left turn occurs, it backtracks as far as necessary, patching up the hull. I defer the obvious helper isLeftOf to the end of the file.
> upperHalfHull points = upperHalfHull' (sort points) []
>     where upperHalfHull' [] hull = hull
>           upperHalfHull' (v:vs) [] = upperHalfHull' vs [v]
>           upperHalfHull' (v:vs) [y] = upperHalfHull' vs [v,y]
>           upperHalfHull' (v:vs) (y:x:zs) = if v `isLeftOf` (x,y)
>                                            then upperHalfHull' (v:vs) (x:zs)
>                                            else upperHalfHull' vs (v:y:x:zs)
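As a standalone sketch of the same scan on plain 2D pairs (with a simplified isLeftOf via the 2D cross product, not the post's Vertex3 version):

```haskell
import Data.List (sort)

-- Is point (x2,y2) strictly to the left of the directed segment
-- from (x0,y0) to (x1,y1)?  (2D cross product sign test.)
isLeftOf :: (Double,Double) -> ((Double,Double),(Double,Double)) -> Bool
isLeftOf (x2,y2) ((x0,y0),(x1,y1)) = (x1-x0)*(y2-y0) - (y1-y0)*(x2-x0) > 0

-- Left-to-right scan keeping only right turns; the hull is kept newest-first.
upperHalfHull :: [(Double,Double)] -> [(Double,Double)]
upperHalfHull points = go (sort points) []
  where go [] hull = hull
        go (v:vs) [] = go vs [v]
        go (v:vs) [y] = go vs [v,y]
        go (v:vs) (y:x:zs)
          | v `isLeftOf` (x,y) = go (v:vs) (x:zs)   -- left turn: backtrack
          | otherwise          = go vs (v:y:x:zs)   -- right turn: push v

main :: IO ()
main = print (upperHalfHull [(0,0),(1,2),(2,0),(1,1)])
-- [(2.0,0.0),(1.0,2.0),(0.0,0.0)]
```

The interior point (1,1) is discarded when (1,2) forces a backtrack, leaving the upper hull (0,0)–(1,2)–(2,0) in newest-first order.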
Then the lower half of the hull does the same thing right-to-left, and I rather naively combine them into convexHull (I traverse the points maybe three times unnecessarily).
> lowerHalfHull points = map rotate180 $ upperHalfHull $ map rotate180 $ points
>
> rotate180 (Vertex3 x y z) = Vertex3 (-x) (-y) z
>
> convexHull :: [Point3] -> [Point3]
> convexHull points = upperHalfHull points ++ lowerHalfHull points
There is a divide-and-conquer algorithm which is probably more idiomatic, and has the same asymptotic complexity (different pathological cases) but this is the one I was trying out.
This last helper function only makes sense when points are all on the z=0 plane. It takes a point and a directed line segment, and indicates whether the point lies to the left of the line defined by that segment.
> isLeftOf :: Point3 -> (Point3, Point3) -> Bool
> isLeftOf (Vertex3 x2 y2 _) (Vertex3 x0 y0 _, Vertex3 x1 y1 _) =
>     let Vertex3 _ _ z = (Vertex3 (x1-x0) (y1-y0) 0)
>                         `cross`
>                         (Vertex3 (x2-x0) (y2-y0) 0)
>     in z > 0