[Geowanking] efficient algorithms for cellular automata?

Brent Pedersen bpederse at gmail.com
Wed Jan 23 11:10:51 PST 2008


On Jan 23, 2008 10:11 AM, Anselm Hook <anselm at gmail.com> wrote:
> Thought I'd ask the list this question more directly:
>
> If you have a large cellular automaton, such as, say, Conway's Life (or
> something with perhaps a few more bits per pixel) - what is an efficient
> way to represent it in memory?
>
> It seems similar to compressing an image: there are a variety of
> image-compression algorithms, and the goal often seems to be finding
> duplicate blocks.
>
> One constraint is that I want the data to be pixel-addressable, and speed
> is critical since the data set may be large.  The best performance is of
> course a flat array with no indirection ( pixel = memory[ x + y * stride ] ).
>
> This is intended to be used to simulate watersheds.
>
>  - a

hi, i don't know how to address your compression question at all, but
re the simulation:

if you can model the CA as a convolution, then you can let python do
the work via numpy/scipy, specifically scipy.signal.convolve2d(),
e.g.:
>>> from scipy.signal import convolve2d
>>> grid = convolve2d(grid, kernel, mode='same', boundary='wrap')
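
for example, conway's life itself reduces to one convolution plus a
couple of boolean ops. a minimal sketch (the life_step name and the
512x512 grid size are just made up for illustration):

    import numpy as np
    from scipy.signal import convolve2d

    # 3x3 kernel that counts the 8 neighbors of each cell
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])

    def life_step(grid):
        # count live neighbors; boundary='wrap' makes the grid toroidal
        neighbors = convolve2d(grid, kernel, mode='same', boundary='wrap')
        # birth with exactly 3 neighbors, survival with 2 or 3
        return ((neighbors == 3) |
                ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

    grid = np.random.randint(0, 2, (512, 512))
    for _ in range(100):
        grid = life_step(grid)

(use boundary='fill' instead of 'wrap' if you want dead cells past the
edges rather than a torus.)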


even if you do need direct per-pixel access, there is excellent support
for that in numpy arrays via a number of options:
cython/pyrex, weave.inline, or pyinstant are all numpy-aware.
this is a good reference:
http://www.scipy.org/PerformancePython
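
as a side note, a plain numpy array already gives you exactly the
flat-array addressing from your question; the compiled-code routes
above only matter when you have a tight per-pixel loop in python. a
quick sketch (the 1024x1024 size is just made up):

    import numpy as np

    # one-byte cells in a C-ordered (row-major) array
    grid = np.zeros((1024, 1024), dtype=np.uint8)

    y, x = 10, 20
    # grid[y, x] is the same memory cell as memory[x + y * stride]:
    assert grid[y, x] == grid.ravel()[y * grid.shape[1] + x]
    grid[y, x] = 1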

i don't know what dimensions you'll be dealing with, but in my
experience this scales pretty well.

-brentp


