## BF Bignum: A Program Synthesis Game

BF Bignum is a program synthesis game: the goal is to synthesize a BF program smaller than a given size that outputs the largest integer possible before halting within a given time limit. A synthesizer's score is the value of the target integer divided by the total number of BF operations executed in the search for the target program.

Bignum is much harder than it sounds, and requires synthesizers to learn modular and hierarchical program representations.

• Your BF program must be smaller than a given size, and halt within the given time limit. Time is defined as the number of BF operations executed during the production of the target integer. Synthesizers compete in a size/time class akin to weight classes in martial arts.
• Your BF program must use `.` to produce its output. Your BF program may have any number of outputs; the largest output is the one incorporated into your synthesizer's score. Note that since there is no input, the instruction `,` is simply ignored.
• Your synthesizer must keep track of the total work done in the search. Work is defined as the number of BF operations executed during the search for the target program, e.g. while computing a candidate program's fitness. Note that two operations done in parallel are still two operations.

It's not clear that this is the best measure of work, and there may be borderline cases, such as static analysis, that may or may not count as execution of a BF program.

• The BF tape contains big integers, not bytes. Your BF program is scored on the largest single integer it outputs; the sequence of integers is not interpreted as a string like in some BF examples.
• You are writing a program that writes a program. You are not writing the bignum program yourself; that's too easy. Of course there are ways to cheat, like embedding a known high-fitness program in the source code of your synthesizer. Don't do that.

A synthesizer's score is target / work: the number output by the target program divided by the number of BF operations performed during the search.
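To make the work metric concrete, here is a minimal sketch of a fuel-limited BF evaluator in Haskell with big-integer cells, under the assumption that each loop test (the check at a `[`) also counts as one operation — the rules above don't pin that detail down, so it's a modeling choice. All names here are illustrative, not part of any official harness:

```haskell
-- Minimal BF evaluator with big-integer cells and an operation counter.
-- Assumption: each loop test counts as one unit of work.

data Op = Inc | Dec | MovL | MovR | Out | Loop [Op]

-- Parse source into a tree; ',' and any other characters are ignored.
parse :: String -> [Op]
parse = fst . go
  where
    go []       = ([], [])
    go (']':cs) = ([], cs)
    go (c:cs)   = case c of
      '+' -> cons Inc cs
      '-' -> cons Dec cs
      '<' -> cons MovL cs
      '>' -> cons MovR cs
      '.' -> cons Out cs
      '[' -> let (body, rest)  = go cs
                 (ops, rest')  = go rest
             in (Loop body : ops, rest')
      _   -> go cs
    cons op cs = let (ops, rest) = go cs in (op : ops, rest)

-- Infinite tape of Integers as a zipper: left cells, focus, right cells.
data Tape = Tape [Integer] Integer [Integer]

blank :: Tape
blank = Tape (repeat 0) 0 (repeat 0)

-- (tape, outputs so far in reverse, operations executed)
type St = (Tape, [Integer], Int)

-- Execute with a fuel limit; Nothing means the time limit was exceeded.
exec :: Int -> [Op] -> St -> Maybe St
exec _    []       st = Just st
exec fuel (op:ops) (tape@(Tape ls x rs), outs, n)
  | n >= fuel = Nothing
  | otherwise = case op of
      Inc  -> exec fuel ops (Tape ls (x + 1) rs, outs, n + 1)
      Dec  -> exec fuel ops (Tape ls (x - 1) rs, outs, n + 1)
      MovL -> let (l:ls') = ls
              in exec fuel ops (Tape ls' l (x : rs), outs, n + 1)
      MovR -> let (r:rs') = rs
              in exec fuel ops (Tape (x : ls) r rs', outs, n + 1)
      Out  -> exec fuel ops (tape, x : outs, n + 1)
      Loop body
        | x == 0    -> exec fuel ops (tape, outs, n + 1)
        | otherwise -> exec fuel body (tape, outs, n + 1)
                         >>= exec fuel (op : ops)

-- Run a program: Just (outputs in order, work done), or Nothing on timeout.
run :: Int -> String -> Maybe ([Integer], Int)
run fuel src =
  (\(_, outs, n) -> (reverse outs, n)) <$> exec fuel (parse src) (blank, [], 0)
```

For example, `run 1000 "+++."` yields `Just ([3], 4)`: three increments and one output, four operations. A synthesizer would sum the work counts of every candidate it evaluates and divide the best output into that total.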

## How to decrease bugs in the code

After a couple of years of using Haskell, I noticed that the bugs in my code had not decreased (a lie, along with "you will be more productive", "you will write code faster", "your code will be smaller" - all of these claims were lies), so I started thinking about how to actually decrease bugs. I know these methods:

• More unit and property tests (available in most languages)
• Free monads - they let you check the logic of "imperative"-looking code without involving real IO

In this case all functions like f :: a -> b -> IO c become f :: Ctr a -> Ctr b -> CtrIO c or something similar. But I'm not sure here, because I need to check not only pre-/post-conditions but also invariants. What would those look like in Haskell, where all you have is a spaghetti of functions? How do you express assertions in those monads? I found work by Andres Loh and Markus Degen, but it does not help me. Most articles (Peyton Jones, Andres Loh, etc.) are very shallow, academic, or impractical (at a research/experimental level, sure, like everything else in Haskell). What do you use to decrease bugs in Haskell code? Real success stories would also be very interesting!
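One way the free-monad approach can address invariants, not just pre-/post-conditions: give the DSL a pure interpreter that re-checks the invariant after every single operation. Below is a hand-rolled sketch (the `free` package on Hackage provides the same `Free`/`liftF` machinery); the account DSL, the names `AccountF`, `runPure`, and the "balance never goes negative" invariant are all hypothetical, chosen only to illustrate the pattern:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Hand-rolled free monad (the "free" package provides the same machinery).
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

liftF :: Functor f => f a -> Free f a
liftF = Free . fmap Pure

-- A hypothetical account DSL: the "imperative-looking" operations.
data AccountF next
  = Deposit Integer next
  | Withdraw Integer next
  | GetBalance (Integer -> next)
  deriving Functor

type Account = Free AccountF

deposit, withdraw :: Integer -> Account ()
deposit n  = liftF (Deposit n ())
withdraw n = liftF (Withdraw n ())

getBalance :: Account Integer
getBalance = liftF (GetBalance id)

-- Pure interpreter: runs the program against an in-memory balance and
-- checks the invariant "balance never goes negative" at every step.
runPure :: Integer -> Account a -> Either String (a, Integer)
runPure bal (Pure a)  = Right (a, bal)
runPure bal (Free op) = case op of
  Deposit n k   -> runPure (bal + n) k
  Withdraw n k
    | bal - n < 0 -> Left "invariant violated: negative balance"
    | otherwise   -> runPure (bal - n) k
  GetBalance k  -> runPure bal (k bal)
```

A program like `deposit 50 >> withdraw 30 >> getBalance` runs through `runPure` with do-notation intact and the invariant checked on every step; the same program can later be given a second interpreter that performs real IO, so tests never touch the outside world.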
In this case all functions like f :: a -> b -> IO c become f :: Ctr a -> Ctr b -> CtrIO c or something similar. But I'm not sure here, because I need to check not only pre-/post- conditions but also invariants. How they can look in Haskell where you have only spaghetti code of functions? How to code asserting conditions in those monads? I found work of Andres Loh, Markus Degen, but this does not helps me. Most of articles (Peyton-Jones, Andres Loh, etc) look very shallowly, academically or unpractical (on research/experimental level, sure, as all other in Haskell). What do you use to decrease bugs in Haskell code? Also will be very interesting real success stories!