[Geowanking] efficient algorithms for cellular automata?
anselm at gmail.com
Wed Jan 23 11:39:16 PST 2008
Yeah, I'm still interested in a pretty large model, and I'd also like to run
multiple variations of that model simultaneously.
Simulating watersheds within, say, just western Oregon - even in two
dimensions - with contributing factors such as salmon populations, sediment
runoff, slope of land, temperature, water, regional variations in
Granted, the ideas should be exercised without waiting to resolve
scalability. Truly scaling would probably require splitting the work
across multiple machines anyway, so any initial architecture would probably
But if there were a turnkey library that took care of these chores, then I'd
just build on that.
On Jan 23, 2008 11:25 AM, Eric Wolf <ebwolf at gmail.com> wrote:
> Image compression schemes work by detecting repeating spatial patterns. If
> you could do this in a CA simulation, you've probably already solved your
> problem. Other kinds of image compression work by altering the pixel values
> (like JPG), which is completely unacceptable. Unfortunately, those are the
> only ones that are randomly pixel-addressable.
> Really, the best solution is to stick as much RAM as possible in the box.
> You can probably use a moving window to page to disk. But as others have
> implied, you're probably biting off too large a problem. Try reducing
> your scale and see if what you are interested in still happens!
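Eric's moving-window paging suggestion isn't tied to any particular library; one way to sketch it, using numpy's memory-mapped arrays (my choice of mechanism, not one named in the thread), is:

```python
import os
import tempfile

import numpy as np

# Back the grid with a file on disk; the OS pages windows of it
# in and out of RAM on demand, so the whole grid never needs to fit.
path = os.path.join(tempfile.mkdtemp(), "grid.dat")
grid = np.memmap(path, dtype=np.uint8, mode="w+", shape=(2000, 2000))

# Operate on a moving window without touching the rest of the grid.
window = grid[100:200, 100:200]
window[:] = 1
grid.flush()
```

Slices of a memmap are views, so the window behaves like any in-memory numpy array while only the touched pages are resident.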
> On Jan 23, 2008 12:10 PM, Brent Pedersen <bpederse at gmail.com> wrote:
> > On Jan 23, 2008 10:11 AM, Anselm Hook <anselm at gmail.com> wrote:
> > > Thought I'd ask the list this question more directly:
> > >
> > > If you have a large cellular automaton, such as, say, Conway's Life (or
> > > something with perhaps a few more bits per pixel), what is an
> > > efficient way to represent it in memory?
> > >
> > > It seems similar to compressing an image. There are a variety of
> > > algorithms for compressing images; the goal often seems to be to find
> > > duplicate blocks.
> > >
> > > One constraint is that I want the data to be pixel-addressable, and
> > > speed is critical since the data set may be large. The best performance
> > > is of course linear time with no indirection ( pixel = memory[ x + y * stride ] ).
> > >
> > > This is intended to be used to simulate watersheds.
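A minimal sketch of the linear-time addressing Anselm describes, pixel = memory[ x + y * stride ], using a flat numpy array:

```python
import numpy as np

# Row-major flat grid: pixel (x, y) lives at index x + y * stride.
width, height = 4, 3
stride = width
grid = np.zeros(width * height, dtype=np.uint8)

def set_pixel(grid, x, y, value):
    grid[x + y * stride] = value

def get_pixel(grid, x, y):
    return grid[x + y * stride]

set_pixel(grid, 2, 1, 7)
```

No indirection, no compression: each access is one multiply, one add, and one memory fetch, which is the baseline any compressed representation has to compete with.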
> > >
> > > - a
> > >
> > >
> > > _______________________________________________
> > > Geowanking mailing list
> > > Geowanking at lists.burri.to
> > > http://lists.burri.to/mailman/listinfo/geowanking
> > >
> > hi, i don't know at all how to address your compression question, but
> > re the simulation:
> > if you can model the CA as a convolution, then you can let python do
> > the work via numpy/scipy, specifically scipy.signal.convolve2d()
> > e.g.:
> > >>> grid = convolve2d(grid, kernel, mode='same', boundary='wrap')
> > even if you do need direct per-pixel access, there is excellent support
> > for that in numpy arrays via a number of options:
> > cython/pyrex, weave.inline, or pyinstant are all numpy-aware.
> > this is a good reference:
> > http://www.scipy.org/PerformancePython
> > i dont know what dimensions you'll be dealing with but in my
> > experience, this scales pretty well.
> > -brentp
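As a concrete instance of Brent's convolution approach, here is Conway's Life built around his scipy.signal.convolve2d one-liner (the surrounding rule logic and test pattern are my own sketch):

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 kernel that counts each cell's eight Moore neighbours.
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(grid):
    # Neighbour counts with wrap-around (toroidal) boundaries.
    neighbours = convolve2d(grid, kernel, mode='same', boundary='wrap')
    # Conway's rules: birth on exactly 3 neighbours, survival on 2 or 3.
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(grid.dtype)

# Blinker oscillator: a vertical bar of three live cells
# flips to a horizontal bar after one step.
grid = np.zeros((5, 5), dtype=np.uint8)
grid[1:4, 2] = 1
grid = life_step(grid)
```

A richer CA (more bits per cell, multiple contributing factors) would follow the same shape: one convolution per neighbourhood statistic, then vectorised update rules over the resulting arrays.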
> Eric B. Wolf 720-209-6818
> PhD Student CU-Boulder - Geography