About me

I am currently based at the Frankfurt Institute for Advanced Studies in Jochen Triesch's neuro group, where I am looking at maximising the mutual information of network states over time in recurrently connected neural networks. Previously I have been:

I am interested in neural computation. Several chapters of my PhD cover what I term "neurally-inspired computing", in which one performs interesting computation using binary storage, probabilistic computing and message passing. So far I have constructed neurally-inspired versions of Bloom filters, error-correcting codes and associative memories.

Email is the best way to get in touch with me:

Interest in Financial Markets

I believe that volatility in today's financial markets is a natural outcome of the structure of our markets. After some thought I believe I have a new market structure that will have less volatility, and will encourage more accurate pricing of stocks. I have written a short paper containing these ideas here. Thanks to JP Sagoo, Emanuel Derman, Sina Tootoonian and Desirae Vanek for providing valuable feedback regarding the idea and its presentation.

I also wrote an article for a popular audience, which was published in the Huffington Post.

Disclaimer: The maths presented in this article is very rough and probably contains errors.

PhD dissertation

Here is my PhD dissertation, entitled "Distributed Associative Memory". It goes into more technical details than the related articles I published.

Article preprints

Efficient and robust associative memory from a generalized Bloom filter: This paper builds an efficient associative memory from a generalized Bloom filter that represents a set of items. We are able to perform inference over the entire set of stored items in the generalized Bloom filter using standard message-passing techniques: given an initial cue, we can find the closest stored item very quickly. This network appears to be over twice as efficient as a Hopfield network, even though it uses only bits while the Hopfield network uses integers.
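For readers unfamiliar with the classical structure the paper generalizes, here is a minimal sketch of a standard Bloom filter. The hashing scheme and parameters below are purely illustrative and are not those used in the paper:

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter: k hash functions index into an m-bit array."""

    def __init__(self, m=256, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indices(self, item):
        # Derive k indices by salting one digest (an illustrative scheme).
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # May return a false positive, but never a false negative.
        return all(self.bits[idx] for idx in self._indices(item))
```

A query such as `BloomFilter().might_contain(x)` is always False on an empty filter, and always True for any item previously passed to `add`; the memory cost is m bits regardless of item size, which is the efficiency the paper's generalization builds on.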

Information recall using relative spike timing in a spiking neural network: This paper examines the ability of a spiking neural network to store point attractors. Given a noisy cue, can the network recover the rest of the pattern? How accurately can the network reproduce individual spikes? What are the tradeoffs between storing many small patterns and a few large patterns?

Miscellaneous academic stuff


After spending some time getting them to work, I thought I'd share my experience of making Autotools, Boost.Python and KDevelop play together. Here is a simple tutorial that should work for you!

Theory of Computing

Here are some notes for a 14-lecture course I gave on the Theory of Computing at Rhodes University. The course was aimed at second-year students with a weak maths background (as seems to be far too common in SA). I'm also including Practical 1 and Practical2.pdf; please let me know if you find any of it useful. (I also have the solutions available for interested lecturers; just get in touch.) (Also, I've been told fig 3.2 is a little misleading.)

Reinforcement Sailing

For my master's at the University of Edinburgh's informatics department, my dissertation looked at applying reinforcement learning to the continuous-state, continuous-action problem of sailing (in computer simulation). The results turned out rather well (the dissertation was awarded a departmental prize); the learnt solution sailed better than I could tune the model by hand. I also picked out the interesting bits of my thesis and turned them into an unpublished article.


I also wrote a short Tetris learning agent, which performs rather well for what it is given. It uses value iteration to learn the value of fitting different pieces into a small Tetris well, then uses that learning to play a game. Training takes less than a second and the agent is able to clear roughly 100 rows. The agent eventually dies since it doesn't know how to uncover the holes it introduces. I shall put the code up in the near future, once it has been tidied up for public consumption.
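Since the agent's code isn't up yet, here is a generic sketch of value iteration, the technique it uses, run on a toy chain MDP. The MDP, function names and parameters below are illustrative stand-ins, not the Tetris agent's actual state space or code:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Compute the optimal state-value function V for a deterministic MDP.

    Repeatedly applies the Bellman optimality backup until the largest
    per-state change falls below tol.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy chain MDP: states 0..3, actions move left/right (clipped at the ends),
# reward 1 whenever the move lands in the goal state 3.
states = range(4)
actions = ("left", "right")

def transition(s, a):
    return min(s + 1, 3) if a == "right" else max(s - 1, 0)

def reward(s, a):
    return 1.0 if transition(s, a) == 3 else 0.0

V = value_iteration(states, actions, transition, reward)
```

On this chain the learnt values increase monotonically toward the goal state, so greedy action selection with respect to V walks right; the Tetris agent applies the same idea with piece placements as actions.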

Personal Stuff

In 2006, just before I came to Cambridge I built a radio-controlled yacht which was lots of fun. Photos of it are available.
