[Geowanking] simple 3D geocode for AR
dcolleen at planet9.com
Fri Aug 28 08:05:53 PDT 2009
I really appreciate the depth and importance of your questions. This is in some regards a question for Michael Jones and the team at Google Earth. GE started as Keyhole, a sister company to Intrinsic, the middleware game-engine company. GE was built on the Intrinsic engine, and I confirmed with one of their engineers that their proprietary 3D format is still in use. I suspect that over the years the Intrinsic format has been highly customized to serve a single goal: serving Earth to millions of users. KML/Collada was added much later as a "bolt-on" to support user-contributed data. (Note that Keyhole, Intrinsic and Collada were all funded by Sony.) Clearly, GE has developed a robust, scalable system, and much could be learned from their experience if they would share. Michael has participated in X3D Earth sessions. One interesting comment he made was that OGC's WMS/WFS were designed in a way that could not support intense user loads.
I will also share my own experience. Our CTO, Chris Nicholas, founded GlobeExplorer prior to joining Planet 9. GX was the largest aggregator of aerial and satellite imagery and needed to supply map tiles at a scale similar to Google's. Early on, they used both Oracle and Microsoft servers and found that they could not afford the server licenses. They turned to open-source efforts, including PostgreSQL for their DB.
We built P9's streaming MU server along similar lines. Over the past year, we have been ramping up to support millions of 3D social networking users. We recently decided to switch from PostgreSQL to MySQL for scaling reasons. Our GeoFeeder server streams 3D data in many formats, but primarily X3D to PC clients and MD3DM (mobile DirectX binary) to cell phone clients. MD3DM has been used, rather than X3D, because of handset support and tool-chain issues. Hopefully, we will serve a unified X3D stream in the future.
We are also the lead architects for the US Navy Virtual Earth system. NVE's goal is to supply an open-standards-based system supporting GIS, A&E and other data types for users in GIS, CAD, security, flight training, etc. NVE is built on OGC's open-source GeoServer with an Oracle DB and Web3D's X3D Earth framework. Our development efforts for the Navy have been contributed back to OGC's source. It has been a great, ongoing challenge to reconcile the needs of AutoCAD- and ESRI-based users, all within a 3D Earth globe framework.
Both NVE and GeoFeeder are designed for large user populations but have yet to be fully tested at GE usage levels.
Now, coming back to one of your original questions, Andrew: almost all 3D scene graphs are syntactically similar, and the resulting file sizes are also very similar. VRML encoded as X3D (XML) is about 5-10% larger in file size. Encoded as Collada, the file grows by about 40% in my tests. This is largely due to each vertex being described in lat/long rather than in a local coordinate system. Clearly, GE has successfully used a stripped-down version of Collada to display individual buildings on its globe. I have heard that those who tried to render entire cities in Collada have had severe scaling issues. Collada was designed as a data storage and exchange format, not for real-time use. I suspect that GE uses the Intrinsic-derived format for large city display. We have been running full city models in X3D for many years and are now doing likewise on iPhones, streamed from GeoFeeder.
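The overhead of per-vertex global coordinates is easy to sketch. A hedged illustration (my own, not from the post; the 111,320 m-per-degree factor is a rough equatorial approximation, and the precision choices are invented): serializing 1,000 nearby vertices with full lat/long precision versus short metre-scale offsets from a single origin.

```python
# Illustrative sketch of why global lat/long per vertex inflates a
# text-based format: full-precision global coordinates need many more
# digits than small local offsets from one shared origin.
import random

random.seed(42)
origin = (37.7749, -122.4194)  # hypothetical model origin

# 1000 vertices within roughly 100 m of the origin (offsets in degrees)
local = [(random.uniform(-0.001, 0.001), random.uniform(-0.001, 0.001))
         for _ in range(1000)]

# Global encoding: every vertex carries full lat/long precision
global_text = " ".join(f"{origin[0] + dx:.9f} {origin[1] + dy:.9f}"
                       for dx, dy in local)

# Local encoding: one origin line, then centimetre-resolution offsets
# in metres (111,320 m/degree is an approximation; ignores latitude)
local_text = (f"{origin[0]:.9f} {origin[1]:.9f}\n" +
              " ".join(f"{dx * 111320:.2f} {dy * 111320:.2f}"
                       for dx, dy in local))

print(len(global_text), len(local_text))  # global encoding is larger
```

The ratio here depends entirely on the invented precision settings, but the direction of the effect matches the Collada-vs-local-coordinates observation above.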
As an aside, I think that quad- or oct-tree tile-based Earth displays need to be replaced by CLOD (continuous level-of-detail) based systems. We funded VTP to do this, but the effort proved to be too large. Recently, I saw the CLODDY platform in Germany. It looks very promising.
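For readers unfamiliar with the distinction: discrete quad-tree tiling refines terrain in whole power-of-two levels, so detail jumps at tile boundaries, which is what CLOD's per-vertex refinement avoids. A minimal sketch of the tile-selection side (my own illustration; the distance-based refinement test is an invented heuristic, not any particular engine's):

```python
# Quadtree tile selection: subdivide tiles near the viewer into whole
# levels; far tiles stay coarse. Detail changes only in discrete steps.
def select_tiles(x, y, size, depth, viewer, max_depth=5):
    """Recursively pick quadtree tiles: subdivide while the viewer is
    close relative to the tile size, else emit the tile at this level."""
    cx, cy = x + size / 2, y + size / 2
    dist = ((cx - viewer[0]) ** 2 + (cy - viewer[1]) ** 2) ** 0.5
    # Heuristic refinement test: subdivide when tile size exceeds the
    # distance to the viewer (and the depth limit is not reached)
    if depth < max_depth and size > dist:
        half = size / 2
        tiles = []
        for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
            tiles += select_tiles(x + ox, y + oy, half, depth + 1,
                                  viewer, max_depth)
        return tiles
    return [(x, y, size, depth)]

tiles = select_tiles(0.0, 0.0, 1024.0, 0, viewer=(100.0, 100.0))
print(len(tiles), max(t[3] for t in tiles))  # finest tiles near viewer
```

A CLOD system replaces these discrete per-tile levels with a mesh whose resolution varies continuously with view distance, avoiding the visible "pop" when a tile changes level.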
I hope this helps.
I agree with most everything that is stated here, and I will confess
that I am not all that familiar with X3D, though at first glance it
looks pretty well-engineered as such things go. However, when I look
at the docs -- and standards are on my mind at the moment -- there is
a giant question that looms large and for which I cannot easily find
an answer. How well does the protocol design scale for real workloads?
What kinds of scales were considered in its design? Many elements of
it seem biased toward mostly static models.
Consider, for example, workloads where you are supporting a sustained
update rate of tens of millions of geospatial polygons per second to a
single, contiguous earth model with billions to trillions of polygon
records. That is not an unrealistic application by any stretch of the
imagination, but there are many aspects of the protocol design that
while negligible on a small scale seem likely to become expensive when
scaled up. By analogy, consider the evolution of a binary-encoded XML
protocols because real XML lacked properties that would allow it to
scale well for that purpose (so they reinvented ASN.1 wire encoding).
Good for when apps are small-ish, but that order of magnitude
performance penalty adds up when apps get big. I am having a hard time
thinking of *any* protocol that was properly engineered for
scalability in its early releases.
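The text-versus-binary encoding gap in that analogy is simple to demonstrate. A hedged illustration (my own example, not from the thread; the XML element names and the length-prefixed record layout are both invented): the same polygon serialized as XML text and as packed binary.

```python
# Same polygon record as XML text vs. a packed binary encoding
# (a crude, ASN.1-style length-prefixed framing, greatly simplified).
import struct

vertices = [(37.774929, -122.419416), (37.775100, -122.419000),
            (37.774700, -122.418800)]

# Text encoding: readable, self-describing, but digit-heavy
xml = "<polygon>" + "".join(
    f"<v lat='{lat:.6f}' lon='{lon:.6f}'/>" for lat, lon in vertices
) + "</polygon>"

# Binary encoding: a little-endian vertex count, then raw float64 pairs
binary = struct.pack("<I", len(vertices)) + b"".join(
    struct.pack("<dd", lat, lon) for lat, lon in vertices
)

print(len(xml.encode()), len(binary))  # binary is noticeably smaller
```

At this toy scale the difference is tens of bytes; at tens of millions of polygon updates per second, the parsing cost of the text form matters at least as much as the byte count.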
I'm not saying it is not designed to scale well generally, and I am
asking because I am too lazy and short on time to do serious research.
;-) Just how suited is the standard for non-trivial real-time 3D
geospatial models? Obviously Google Earth comes up very short in this
domain, but is X3D just buying a modest extension of capability, or is
it a genuinely robust model that can handle anything thrown at it?
J. Andrew Rogers