Featured Posts


Dataflow diagram of Yampa reactimate

download diagram (.svg), fonts (cmr10.ttf, cmtt10.ttf) For me as a Haskell beginner the biggest problem in understanding Yampa reactimate was how the objects are actually passed around and transformed, as all the signatures are very, very... very generic. This diagram shows an example scenario starring Pacman (the player), a cherry (enemy ...


Aeon Racer mobile game

Today our little indie game start-up, Modern Alchemists, released its first "bigger" game, Aeon Racer, for iPhone and Android! Check it out at (I declare an interest) I really do like the game, mostly because of the controls. You really have to have fast reflexes to dodge the gates and I ...



Profiling an interactive simulation in Haskell – part 1


This post is a feasibility test for developing bigger computer games using functional reactive programming (a.k.a. functional temporal programming) with the Yampa library. I use GHC performance profiling on a small interactive computer game simulation which implements animation, rendering, collision detection, artificial intelligence and game-object management at a very basic level (see the Robotroannah video). The focus, however, is on Yampa-specific code, i.e. animation, AI and object-management logic. The game will be scaled up to several hundred objects with different behaviours, where one frame has to be calculated at a “reasonable speed”. I don’t want to make it just run “smooth enough overall”, because physics and rendering will be replaced in the future anyway.


  1. “You can’t optimize what you can’t measure”
  2. Define goals (60 fps, profile only module X etc.) and simplify to the necessary parts.
  3. Understand time profiling, heap profiling and cost centers.
  4. Create reproducible conditions in interactive programs.
  5. Compile with ghc --make myprog.hs -o myprog -prof -O2 -fforce-recomp
  6. Use -auto, -auto-all or -caf-all to automatically add additional “interesting” cost centres. I prefer -auto as the others produce lots of spam.
  7. Use -fforce-recomp compiler flag when adding {-# SCC "COST_CENTER_NAME" #-}.
  8. Performance profile with ./myprog +RTS -p and open the generated myprog.prof file.
  9. Heap profile with ./myprog +RTS -h; hp2ps myprog.hp and open the generated myprog.ps file.
  10. End all unnecessary processes to produce stable conditions and profile several times to get stable results.
  11. Always document conditions and code changes in profiling reports.
  12. Always profile with the -O2 compiler flag. Otherwise you may optimize the wrong program parts.
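As a minimal sketch of points 5–7, this is what a manual cost centre might look like (module and function names here are hypothetical, not from the actual game):

```haskell
module Main where

-- A deliberately allocation-heavy function, annotated with a manual
-- cost centre so it shows up by name in the .prof report.
expensive :: Int -> Int
expensive n = {-# SCC "expensive_sum" #-} sum [1 .. n]

main :: IO ()
main = print (expensive 1000000)
```

Compiled with ghc --make Main.hs -o myprog -prof -O2 -fforce-recomp and run as ./myprog +RTS -p, the report attributes time and allocation to the "expensive_sum" cost centre.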

Lessons learned: Even though Haskell is still in heavy development and it’s okay to question the profiler, investigate its strange claims first and see whether addressing them resolves the bottlenecks. If it does: good :). If not, question the tool.

Open questions

  • How can I define the filename of profiling output? (possibly with > shell operator)
  • How can I automatically number the output files?
  • How can I guarantee that certain code parts are always evaluated? (possibly with strictness)
  • How can I sort by constant and linear heap consumption in hp2ps?

In GHC 7 the -rtsopts compiler flag is used to activate runtime options. I found a bug in profiling, however, and had to switch back to GHC 6.12.1. Bug report: “<<loop>> when compiling with -O option with ghc-″


Status quo

The original version of the computer game “Robotroannah” implements simple animation (animateLoop, animateLoopFrame), AI (knightObj), physics (observeTypeCollisions) and object management (createReq, deleteReqs). Rendering takes 3 render steps (legs, body, head) at a fixed frame rate of 60 Hz, using SDL.delay to “sleep away” the available frame time (1000/60 = 16.67 ms). Spritesheet resources are loaded once but are passed to each object every cycle(!) right now. The object-management logic uses dpSwitch from The Yampa Arcade.
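The fixed-frame-rate idea boils down to simple arithmetic: compute the 60 Hz frame budget and sleep away whatever is left of it. A toy sketch (the real code uses SDL.delay; leftoverMs is an illustrative name):

```haskell
-- Frame budget at a fixed frame rate of 60 Hz.
frameBudgetMs :: Double
frameBudgetMs = 1000 / 60   -- ~16.67 ms per frame

-- Given the time already spent on the current frame,
-- how many milliseconds are left to "sleep away".
leftoverMs :: Double -> Double
leftoverMs elapsed = max 0 (frameBudgetMs - elapsed)

main :: IO ()
main = print (leftoverMs 10)
```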


  • First performance profiling
  • First heap profiling
  • Investigate heap consumption
  • Profile 100 mines in isolation (animation only)
  • Profile 100 knights in isolation (animation + AI)
  • Profile object management in isolation

Performance profiling 1

In performance monitoring we know that “you can’t optimize what you can’t measure” and that “we must not make any assumptions about performance bottlenecks”… and I have a good real-world example of this: I implemented a resource manager to avoid passing the spritesheet resources to every object, as I assumed it would take a lot of memory. This later turned out not to be a problem at all (which I in turn assume is due to laziness).

First I implemented a frame counter, which sufficed as a simple indicator of slow performance. The game was indeed slow and took 18 ms to render a single frame with only 3 objects(!) (target is 16.67 ms).

In order to create reproducible results I would have had to create some sort of “user-input recording” to replay the same simulation, but for now I just used “no input at all” and automatically killed the simulation after exactly 10 seconds. I also removed the “sleep away available frame time” step. Thus the problem of “profiling an interactive program” turned into the problem of “profiling the rendering of 6000 frames as fast as possible”.
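Reduced to its essence, the reproducibility trick is to run a fixed number of deterministic steps instead of reacting to wall-clock time or user input. A sketch (simulate and totalFrames are illustrative names, not the actual game code):

```haskell
-- Step a pure simulation state a fixed number of times; with no user
-- input and a fixed step count, every profiling run performs exactly
-- the same work.
simulate :: Int -> (s -> s) -> s -> s
simulate 0 _    s = s
simulate n step s = simulate (n - 1) step (step s)

totalFrames :: Int
totalFrames = 6000

main :: IO ()
main = print (simulate totalFrames (+ 1) (0 :: Int))
```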

To simplify the scene I removed the player object and increased the number of objects to 100 “mine objects”. I also replaced the object-management dpSwitch with a parB to exclude performance impacts from object creation and deletion. This may produce an artificial situation, however, as Haskell may do some optimization (like caching) which would not happen with different objects.

    Mon Jan  3 19:53 2011 Time and Allocation Profiling Report  (Final)

       robotron +RTS -p -RTS

    total time  =        5.76 secs   (288 ticks @ 20 ms)
    total alloc = 501,932,352 bytes  (excludes profiling overheads)

COST CENTRE                    MODULE               %time %alloc

outputSDL                      Venice.Subsystem.SDL  57.3   26.6
observeTypeCollisions          Venice.Process        12.2   11.0
animateLoop                    Venice.Process         7.6   16.3
animateLoopFrame               Venice.Process         7.3    8.3
mineObj                        Robotron.Objects       4.9   14.1
main                           Main                   3.5    4.0
core                           Main                   1.7    8.4
playerObj                      Robotron.Objects       1.4    1.2
elemsIL                        Venice.IdentityList    1.4    4.3
body                           Robotron.Types         1.0    0.3


  • 60% spent in rendering, which is plausible. I can ignore rendering, but I added more cost centres to investigate further, just out of curiosity.
  • 15% spent in physics, which is also plausible and can also be ignored.
  • 15% spent in animation, which I find strange.
  • 500 MB total allocation over 6000 frames! What the heck?

Heap profiling 1

(Never mind the inconsistencies in the dates and objects, please! I just forgot to keep the original reports, but they look similar.)

As you can see there are some space leaks, as the heap consumption grows linearly even though no objects are added. I tested animateLoop and animateLoopFrame in isolation, and they turned out to have constant consumption. These space leaks actually took me quite some time to track down. What I should have done is focus on the calling function – mineObj (or cometObj). I originally assumed that it only does some initialization and assignment, which couldn’t have any impact on performance. This, however, contradicted the profiling report (5% spent in mineObj). I later indeed found the cause in mineObj: every frame I did a conversion from spritesheet space to SDL rendering space, which could actually be done once and for all in the initialization step. The new heap profile looks far better.
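The fix, in miniature: hoist the per-frame conversion into initialization. A sketch with hypothetical names and a made-up conversion (the real one maps spritesheet coordinates to SDL rendering space):

```haskell
type Rect = (Int, Int, Int, Int)

-- Hypothetical spritesheet-to-screen-space conversion.
toScreenSpace :: Rect -> Rect
toScreenSpace (x, y, w, h) = (2 * x, 2 * y, 2 * w, 2 * h)

-- Before: the conversion is recomputed on every one of the n frames,
-- allocating a fresh result each time.
framesLeaky :: Int -> Rect -> [Rect]
framesLeaky n r = [ toScreenSpace r | _ <- [1 .. n] ]

-- After: convert once at initialization and share the result.
framesFixed :: Int -> Rect -> [Rect]
framesFixed n r = let r' = toScreenSpace r in [ r' | _ <- [1 .. n] ]

main :: IO ()
main = print (framesFixed 3 (1, 1, 8, 8))
```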

The new performance profile still looks much the same, however :/

    Sun Jan  9 21:39 2011 Time and Allocation Profiling Report  (Final)

       robotron +RTS -p -RTS

    total time  =        6.92 secs   (346 ticks @ 20 ms)
    total alloc = 496,279,440 bytes  (excludes profiling overheads)

COST CENTRE                    MODULE               %time %alloc

outputSDL                      Venice.Subsystem.SDL  64.7   30.8
animateLoop                    Venice.Process         8.4   18.9
observeTypeCollisions          Venice.Process         6.9    1.2
mineObj                        Robotron.Objects       6.4   15.6
animateLoopFrame               Venice.Process         5.8    9.5
main                           Main                   2.6    4.9
core                           Main                   2.0   10.1
body                           Robotron.Types         1.2    0.4
output                         Main                   0.3    1.1
elemsIL                        Venice.IdentityList    0.0    5.2

A mystery I will resolve another time…



Pretty printing Haskell code in LaTeX

This is a small code snippet for LaTeX which defines some rewriting rules for operators to generate pretty-printed Haskell code. It converts symbols like <- or `elem` into real mathematical symbols and also supports some Monad and Arrow combinators. It’s not complete though. Here is an example output:

You may have to install the “MnSymbol” package, which is used for the Arrow combinator -<. It is different from the other Arrow symbols, but it’s the closest one I found in the LaTeX symbol reference.

Update: GHC uses ↢ and ↣ for arrows.

    \lstnewenvironment{code}
        {\lstset{%
           literate={+}{{$+$}}1 {/}{{$/$}}1 {*}{{$*$}}1
                    %{/=}{{$\neg$}}1
                    {>}{{$>$}}1 {<}{{$<$}}1 {\\}{{$\lambda$}}1
                    {++}{{$+\!\!\!+$}}1 {::}{{$:\!\!\!:$}}1
                    {->}{{$\rightarrow$}}2 {>=}{{$\geq$}}2 {<-}{{$\leftarrow$}}2
                    {<=}{{$\leq$}}2 {=>}{{$\Rightarrow$}}2
                    {\ .\ }{{$\circ$}}2 {(.)}{{($\circ$)}}2
                    {<<}{{$\ll$}}2 {>>}{{$\gg$}}2 {>>=}{{$>\!\!>\!\!=$}}2
                    {<<<}{{$\lll$}}2 {>>>}{{$\ggg$}}2 {-<}{{$\leftY$}}1
                    {^<<}{{$\hat{}\!\!\ll$}}2 {^>>}{{$\hat{}\!\!\gg$}}2
                    {not\ }{{$\neg$}}1}%
         \csname lst@SetFirstLabel\endcsname}
        {\csname lst@SaveFirstLabel\endcsname}


AFRP Reference

I found a reference sheet in PostScript format in the FRVR fork of Yampa which I found quite useful, so I converted it to PDF. Download AFRP Reference.pdf!


Installing Haskell profiling libraries

Profiling with GHC is done by compiling with the -prof parameter, which adds profiling code to the program; the profiler is then activated by executing the program with ./myprogram +RTS -p -RTS. This creates a log file called myprogram.prof:

$ ghc -prof --make -O2 myfile.hs
$ ./myfile +RTS -p -RTS
$ cat myfile.prof

Unfortunately this doesn’t work out of the box. You have to install the profiling versions of all libraries and all their dependencies. And even more unfortunately, cabal doesn’t resolve dependencies for profiling libraries.

$ cabal install --reinstall -p libraryname

Update: To automatically install the profiling libraries for future installations, edit ~/.cabal/config and set the profiling options to True. (thanks Erik)
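For reference, the relevant line in ~/.cabal/config looks roughly like this (the exact field names may differ between cabal versions):

```
-- in ~/.cabal/config
library-profiling: True
```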

Update 2: You may only want to do this for libraries. I got leksah-server complaining that the -N2 option is not available in a profiling executable.

But the reinstallation produces errors like:

    The export item `Permissions(Permissions, readable, writable,
                                 executable, searchable)'
    attempts to export constructors or class methods that are not visible here
cabal: Error: some packages failed to install:
MissingH- depends on haskell98- which failed to install.
haskell98- failed during the building phase. The exception was:
ExitFailure 1


>>     Could not find module `Data.List.Utils':
>>       Perhaps you haven't installed the profiling libraries for package
>> missingh-
>>       Use -v to see a list of the files searched for.

I also tried removing and recompiling some packages completely, like so:

$ ghc-pkg unregister MissingH    # case-sensitive
$ cabal unpack MissingH
$ cd MissingH-
$ cabal configure -prof
$ cabal build
$ cabal install

There are some fundamental libraries which cannot be installed via cabal (like parsec3), so instead you have to:

$ sudo apt-get install libghc6-*-prof

But eventually I found out that my packages were broken anyway (check with ghc-pkg check) and I had to reinstall the Haskell Platform, which also didn’t work because:

$ sudo apt-get purge ghc6    # didn't work

Just do:

$ mv ~/.ghc6 ~/.ghc6.bak
$ sudo apt-get purge ghc6
$ sudo apt-get install --reinstall haskell-platform

Some links which were quite helpful:
– Profiling
– Stack Overflow: Mysterious cabal install problems

I will publish the profiles as soon as I get a new power supply, hmpf :/


FRP – Robotroannah Trailer

Robotroannah is a clone of the Robotron: 2084 computer game. It uses functional reactive programming (FRP) with Haskell/Yampa and the hsSDL subsystem. FRP allows the functionality of a game to be connected in an incredibly flexible and reusable way, resulting in about 250 lines of game-specific code.

FRP: Yampa
Sprites: Ball Bullet Gun, Mercanaries, sprite lib
Music: Bo Mellberg, J.K. – Dawn Beyond Ub-11

I implemented the entire functionality of the game.


Cheap do-it-yourself bookscanner

Scanning a whole book with a normal flatbed scanner is a very tedious task. A better way is to just take photos of the pages with macro mode activated, which gives at least very human-readable results. Automatic text-recognition software (OCR), however, has a hard time reading the lines correctly, so a lot of extra manual work is necessary to prepare a scanned book for re-publishing. The biggest problem is the non-flat pages, and my first solution was to push the pages down with a piece of glass and shoot photos from a tripod. In the do-it-yourself bookscanner forum they came up with a more sophisticated solution called “the standard model”, which usually looks something like this:

The images are taken from two opposite sides. The glass “box” presses the book down very evenly and produces perfect images. One point to keep in mind when building a book scanner, though, is that the middle of the book shifts as more pages are flipped. A colleague and I built the following prototype, a very simple and cheap build we came up with (10€ for wood and screws + 20€ for plastic glass):

It has movable parts to hold the book, which can be fixed in place with little wing screws. The arms for the digicam just use friction to stay in place. After doing some test shots I noticed that all that was needed were flat images, so if I shoot the book from the middle with only one camera I still get flat images which are just uniformly distorted towards the back. Thus I finally came up with…

* from the German “Rammbock” (battering ram)… in case you didn’t get the pun ;)

You actually move the whole scanner and “ram” the book into a corner. The pages are shot with one camera only, which snaps both sides. The little cardboard shield in the middle is just to avoid specular highlights. The quality is actually pretty good on full-text pages (10 megapixels). FineReader 10 gets confused by dewarping, though, if there is very little text on the page (like the TOC page), so I have to remove the distortion first. I’m very proud of this model as it is very, very cheap and only uses one camera. You can also find a thread about my build in the forum.

Lessons learned for next build:

  1. Avoid scratches in glass!
  2. Use a real camera mount! 1. The camera really gets fixed in place; 2. Metal screws won’t destroy the mount hole :/
  3. Use a thinner bottom => text at the bottom won’t get covered
  4. Use a spotlight shield (and use thin wood instead of cardboard if possible, because it’s more stable) :)
  5. Use black or white wood => no paper needed to cover the ram to avoid blotches
  6. Buy a power supply and a >512 MB memory card for the camera

The book in the image is called “Against Intellectual Monopoly” and is about patent law: available at Amazon or as a free ebook.


HOGRE 0.1.0 on Ubuntu 10.10

Antti Salonen was kind enough to guide me through the HOGRE installation. This is an installation guide for HOGRE on Ubuntu:

If you already have libogre-dev installed, check that you have OGRE version 1.7.1:

$ pkg-config --modversion OGRE

If not, add the following repository to the apt sources and install the OGRE SDK:

sudo apt-add-repository ppa:ogre-team/ogre
sudo apt-get install libogre-dev

(from OGRE3D SDK Downloads -> OGRE 1.7.1 Ubuntu PPA)

To install HOGRE using cabal do:

cabal update
cabal install cgen
export PATH=$PATH:$HOME/.cabal/bin   # so the cgen binary installed by cabal is found
cabal install hogre
cabal install hogre-examples

Try the HOGRE examples with:

cd ~/.cabal/share/hogre-examples-0.1.0

You should see something like this:

Thank you Antti Salonen for your help and HOGRE!
Official HOGRE website, HOGRE API


Diagrams of Yampa switches

I handed in my thesis the other day. Here are diagrams of all the Yampa switches, which hopefully are self-explanatory. Note that each of them comes in 2 different flavors: immediate and delayed. In the end there is always a function (k) which does the actual switching (dashed line and white diamonds), so switching can be interpreted as redirecting the inputs to the new signal function (a black box, because we only know its input and output types).


  • ? t: Yampa.Event carrying a type t
  • C-shaped arrow: continuation
  • box-shaped arrow: unbox from event

download fonts (CMU Classic Serif, CMU Typewriter Text)


switch :: SF in (out, Event t)
       -> (t -> SF in out)
       -> SF in out

download Yampa_switch.svg
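To give an intuition for the semantics, here is a toy analogue of switch over plain functions of time. This is a deliberate simplification, not real Yampa: it ignores Yampa's restart of local time at the switching instant and the whole continuation machinery; all names are illustrative.

```haskell
data Event a = NoEvent | Occurred a

-- A "signal function" simplified to a plain function of time.
type Sig b = Double -> b

-- Until the event occurs, run the initial signal function; once it
-- occurs, the input is redirected to the signal function that k
-- builds from the event value.
switchSig :: Sig (b, Event c) -> (c -> Sig b) -> Sig b
switchSig sf k t =
  case sf t of
    (b, NoEvent)    -> b
    (_, Occurred c) -> k c t

-- Output 0 until t >= 2, then output 1 forever.
example :: Sig Double
example = switchSig (\t -> (0, if t >= 2 then Occurred () else NoEvent))
                    (\() _ -> 1)

main :: IO ()
main = print (map example [0, 1, 2, 3])
```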


rSwitch :: SF in out
        -> SF (in, Event (SF in out)) out

download Yampa_rSwitch.svg


kSwitch :: SF in out
        -> SF (in, out) (Event t)
        -> (SF in out -> t -> SF in out)
        -> SF in out

download Yampa_kSwitch.svg


pSwitchB :: Functor col
         => col (SF in out)
         -> SF (in, col out) (Event mng)
         -> (col (SF in out) -> mng -> SF in (col out))
         -> SF in (col out)

download Yampa_pSwitchB.svg


pSwitch :: Functor col
        => (forall sf. (in -> col sf -> col (ext, sf)))
        -> col (SF ext out)
        -> SF (in, col out) (Event mng)
        -> (col (SF ext out) -> mng -> SF in (col out))
        -> SF in (col out)

download Yampa_pSwitch.svg


rpSwitchB :: Functor col
          => col (SF in out)
          -> SF (in, Event ( col (SF in out)
                          -> col (SF in out)))
                (col out)

download Yampa_rpSwitchB.svg


rpSwitch :: Functor col
         => (forall sf. (in -> col sf -> col (ext, sf)))
         -> col (SF ext out)
         -> SF (in, Event ( col (SF ext out)
                         -> col (SF ext out)))
               (col out)

download Yampa_rpSwitch.svg


Why I switched from component-based game engine architecture to functional reactive programming

Components have become pretty popular these days and I’d like to share some issues we had with them in our game projects. I experimented with component-based game engine architectures for 2 years and eventually stumbled upon functional reactive programming (FRP) to solve some of the issues of game-object components. I hope to provide some arguments from an object-oriented programming (OOP) viewpoint as to why I think FRP helps to write more reusable code.

The argument for component-based game engines usually starts like this: game-objects should be represented like objects in reality; as the project progresses, however, class hierarchies become more and more complex and should therefore be split into small, reusable components. Game developers like abstract and reusable game engines, after all. The following diagram illustrates such a monolithic class hierarchy (adapted from the book Game Engine Architecture):

Game-object components address these issues by reducing game-objects to identifiable containers of components, where each component encapsulates some reusable functionality and automatically communicates with other components. A component could be something like: a position, player movement, a 3D model, health points and so on. For the communication between components in our engine we used messages and events, or let components directly search for other components implementing a specified interface, as illustrated in the following diagram:

In component-based architecture we were mostly concerned with component intercommunication, take the player movement for example: should the Mover component manipulate the position directly or send movement messages? Or should the Position component listen for movement events? What if we want the player movement to behave just a little differently, like being a little random? Do we implement a new RandomMover component, or do we need a new component,… again,… just to encapsulate the random function into a component? And how does this fit into the automatic Position–Mover communication?

In the book Component-based Software Engineering (CBSE), a component is actually defined as:

A software component is a software element that conforms to a component model and can be independently deployed and composed without modification according to a composition standard. [...] A component model defines specific interaction and composition standards. [...]

In plain English: “In order to allow a new component to be added to an existing environment of components in a game-object and to automatically communicate with the others, the communication protocol between the components has to be predefined.” I think the component-based game engine literature mixes up “combining existing functionality to define game-specific logic” (= reusing functionality) and “automatic communication within unknown game-objects” (= dynamic functionality). Automatic communication is restricted to the known components, and new functionality always has to be game-specific (either hard-coded, in scripts, or via any other data-driven method). Even with components you define the player game-object explicitly at some point: Player = PositionComponent + VisualComponent + InputComponent.

In FRP everything is based upon time, so time should be abstracted away into “time-dependent functions” (behaviors). A Mover “component” should just produce a translation vector over time from user input – nothing more! And if we want it to be a bit random, we just compose the output with a random function in the game-object definition. A pure mathematical function is the most isolated, self-contained and reusable component you can get, as it depends (and only depends!) on its direct input. There is no need to encapsulate a function into a component… which is then managed by a component container (game-object) and defines a lot of messages and events to get the data to the right place.

I’m arguing that game-objects are always game-specific, but with time-dependent functions you just have to combine the existing functionality in the right way for every specific game-object. For example, let the movement of a game-object be calculated as: the initial position + some translation over time from user input + a random factor + … and so on. You may then compose the movement behavior into more complex behaviors (or complete game-objects if you like) in every way you can imagine. A movable static-image game-object? MovingImage = translation function + draw-image function. A movable animated game-object? Certainly! MovingAnimation = same translation function + draw-animation function. Or just a static animation perhaps? StaticAnimation = position + draw-animation function.
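The composition argument can be sketched with plain functions of time (a toy model; Yampa's signal functions are more general, and all names here are illustrative):

```haskell
type Time       = Double
type Behavior a = Time -> a
type Pos        = (Double, Double)

-- Elementary building blocks: each is just a pure function of time.
initialPos :: Pos
initialPos = (100, 100)

-- The "Mover" reduced to its essence: a translation over time.
translation :: Behavior Pos
translation t = (10 * t, 0)

-- Combine two behaviors by adding their outputs pointwise.
plus :: Behavior Pos -> Behavior Pos -> Behavior Pos
plus f g t = let ((a, b), (c, d)) = (f t, g t) in (a + c, b + d)

-- A concrete game-object movement: initial position + translation.
-- A random factor would just be one more `plus` term.
movement :: Behavior Pos
movement = const initialPos `plus` translation

main :: IO ()
main = print (movement 2)
```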

I’m not going into more detail about FRP for now; you can find more information on my blog or on the internet. Component-based software engineering can still be applied in this environment at a higher level, like the communication between subsystems or game-objects, but not at the level of game-object functionality! If you still think you want to implement game-object components, ask yourself the following questions:

  • How can different functionality be defined in terms of relatively complex, non-elementary components?
  • What are the most elementary components?
  • In which way are elementary components different from pure functions?
  • How can existing components automatically communicate with new messages from new components?
  • What is the point of ignoring a message that a component doesn’t know of?
  • What happened to the input-process-output model after all?


FRVR fork of Yampa

I read the paper Dynamic, Interactive Virtual Environments the other day. In section 8.3 Kristopher J. Blom writes about his extension to Yampa called FRVR:

The code structure was improved by:
* separating out the system internals into an AFRPInternals module,
* separating out the switch functionalities into their own module, AFRPSwitches, and
* sorting the remaining code into the proper places.

Additionally, advanced developers can explicitly import the internals module when extension of the functionality of Yampa is required.

I don’t know which version of Yampa FRVR uses, but maybe this package allows me to extend reactimate into a full game loop, including resources, without modifying the Yampa code itself.

The paper and FRVR are part of the DIVE project.
