[Geowanking] comments on the Tonchidot video
mnl at well.com
Wed Sep 17 16:54:06 PDT 2008
This is an interesting mock-up, a little more refined than the
enkin.net hack. Here are a few quick thoughts:
1. Google has already released mobile Street View ( see
); AR geotags are sure to follow. They're already partially supported by
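To make the geotags-in-the-viewfinder idea concrete, here's a minimal sketch of the math a handset would need: compute the great-circle bearing from the user to a tagged point and test whether it falls inside the camera's horizontal field of view. Function names and the 60-degree FOV are my own illustration, not anything from the Tonchidot demo.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def in_viewfinder(user_lat, user_lon, heading, tag_lat, tag_lon, fov=60.0):
    """True if the geotag's bearing lies within the camera's horizontal FOV,
    given the compass heading the phone is pointed at."""
    b = bearing_deg(user_lat, user_lon, tag_lat, tag_lon)
    offset = (b - heading + 180.0) % 360.0 - 180.0  # signed offset in [-180, 180)
    return abs(offset) <= fov / 2.0
```

A tag due east of the user shows up when the phone faces east (heading 90) and not when it faces west (heading 270); the signed offset would also give you the tag's horizontal screen position.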
2. Creating an image DNS is a great idea, and necessary for 3D
geopositioning by matching viewable points against a pointcloud
database, perhaps achievable by an OSM-style process. In the meantime,
experimenting with existing data seems like a path forward.
Earthmine.com has offered the hacker community experimental access to
their very detailed database of 3D pointclouds for cities in the western
US (not PDX, alas); they claim 20cm/pixel 3D location accuracy.
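One plausible shape for an image-DNS lookup is nearest-neighbour matching of local feature descriptors (SIFT-like vectors) against a table keyed to surveyed 3D points. This toy sketch uses random descriptors in place of real SIFT output and a Lowe-style ratio test to reject weak matches; all names and data here are made up for illustration.

```python
import numpy as np

# Toy "image DNS": feature descriptors, each keyed to a 3D world point.
# Real descriptors would come from SIFT/SURF; real points from a pointcloud
# survey like Earthmine's. Here both are synthetic.
rng = np.random.default_rng(42)
db_descriptors = rng.normal(size=(1000, 128))       # 128-D, SIFT-like
db_points = rng.uniform(-100, 100, size=(1000, 3))  # x, y, z in metres

def lookup(query, descriptors=db_descriptors, points=db_points, ratio=0.8):
    """Return the 3D point of the best-matching descriptor, or None if the
    best match is not clearly better than the second best (ratio test)."""
    d = np.linalg.norm(descriptors - query, axis=1)
    best, second = np.argsort(d)[:2]
    if d[best] < ratio * d[second]:
        return points[best]
    return None
```

A production version would swap the linear scan for an approximate nearest-neighbour index (kd-tree, LSH) so the database can grow to OSM scale.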
3. There's enormous work ahead in designing a usable UI for sorting
through what will be abundant visible data and media draped across the
real world, viewable through our mobile viewfinders.
4. So far there is no standard markup for KML placemark docs. They
are NOT, but should be, fully functional web objects. Yah, I know you
can insert HTML, but can you implement AJAX in a KML placemark?
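To make the point concrete, here is a minimal sketch (my own illustration) of what "inserting HTML" amounts to today: raw HTML wrapped in CDATA inside a placemark's description. Viewers render the markup in the balloon, but there's no standard guarantee that scripts run, which is exactly the web-object gap above.

```python
def placemark(name, lat, lon, html):
    """Build a minimal KML Placemark whose description carries raw HTML
    inside a CDATA section (the only sanctioned way to embed markup)."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <description><![CDATA[{html}]]></description>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

# Illustrative values only.
doc = placemark("example tag", 45.52, -122.68,
                '<b>Hello</b> <a href="http://example.com">more</a>')
```

The HTML survives as inert balloon content; a `<script>` tag dropped into that CDATA block is where portability ends.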
5. We'd better move quickly: the handheld AR meme is in the air. The
Tonchidot demo demonstrates that the mainstream TechCrunch VC crowd is
now dialed in and very stoked by the possibilities of the viewfinder as the
Anselm Hook wrote:
> I must say that although it is expected, and was pointed out earlier
> in many of our respective social cartography groups, and is clearly
> head and shoulders above the other offerings in the space - it is so
> well done that it is almost hard to even be jealous:
> Reminds me quite a bit of the hands-on SIGGRAPH demos I used to try
> way back in the last century, before it was quite clear that the
> Gibsonian future was the baseline. The SIGGRAPH demos did very
> similar things - you would put on a bulky heads-up display and all of
> a sudden your real world would be instrumented with digital media -
> pinned to a cumbersome QR code instead of just brute-force image
> recognition using SIFT or something, as this one appears to do - but it
> was still pretty freakishly cool at the time....
> Here - in this demo - it is just so sweetly cute to see that these
> folks 'get it' and skipped ahead of the tedium of having to 'click' or
> engage in a complicated negotiation to get the instrumented reality
> display up....
> It would be so cool if we could inject our own perception into that
> reality - if we could have a kind of image dns... as mentioned previously.
> To go into rant mode for a second -> basically these people are
> recolonizing our fucking reality... They're the gatekeepers on the
> new way that people will SEE. That's a crazy power. I want me some
> of that. :-)
> - me
> Geowanking mailing list
> Geowanking at lists.burri.to