November 8, 2009

Fun with Layars

Last night I installed Layar on my phone and had some fun checking out the Twitter and Wikipedia layers. So I signed up for an API key, and 30 seconds later saw a tweet mentioning the California Data Camp. Perfect! After a rare and blissful sleep-in, I wandered over to see what was going on at Citizen Space, thinking I'd try to get a proof-of-concept demo showing some City of SF data in Layar.

It turns out that, despite a number of interesting conversations taking precedence over my coding, I managed to get a simple demo working, and even won an Honorable Mention (and an iPod touch) for my efforts. A couple of layers (crime data and handicapped parking spaces) are just waiting for publishing approval from Layar, and will hopefully be available in a few hours. Just search for "datasf" in your Layar app.

Since GeoDjango was the reason I was able to get a mockup going so quickly, I thought I'd write a few short notes on the steps I took to build the Layar-compatible API, and make the code available. Note that the code here is not particularly pretty - it's the result of a partial afternoon of work (including finding/downloading the data layers, going over the Layar API docs, and dealing with incredibly spotty internet connectivity). Nonetheless it may be interesting to some GeoDjango newbies as an example of a quick proof of concept.
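
For a flavor of what's involved, here's a minimal sketch of the kind of GeoDjango view that answers Layar's getPOIs request. This is not the actual datasf code: the CrimePoint model and its fields are made up for illustration, and the JSON field names follow my reading of the Layar docs at the time, so check the current spec before relying on them.

# Python 2.6+; older setups would use django.utils.simplejson instead.
import json

from django.contrib.gis.geos import Point
from django.contrib.gis.measure import D
from django.http import HttpResponse

from myapp.models import CrimePoint  # hypothetical model with a PointField named 'location'


def get_pois(request):
    # Layar sends lat/lon in decimal degrees and the search radius in meters.
    lat = float(request.GET['lat'])
    lon = float(request.GET['lon'])
    radius = int(request.GET.get('radius', 1000))

    # One GeoDjango distance lookup does all the spatial work (assumes a
    # spatial backend like PostGIS behind the model).
    center = Point(lon, lat, srid=4326)
    points = CrimePoint.objects.filter(
        location__distance_lte=(center, D(m=radius)))[:50]

    hotspots = []
    for p in points:
        hotspots.append({
            'id': str(p.pk),
            'title': p.title,
            # This version of the Layar API wanted coordinates as integer
            # microdegrees, as I read the docs.
            'lat': int(p.location.y * 1e6),
            'lon': int(p.location.x * 1e6),
        })

    data = {
        'layer': request.GET.get('layerName', 'datasf'),
        'hotspots': hotspots,
        'errorCode': 0,
        'errorString': 'ok',
    }
    return HttpResponse(json.dumps(data), mimetype='application/json')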

The homepage is at http://code.google.com/p/geodjango-layar/, and the wiki includes some play-by-play instructions if you're just getting started with this stuff. Enjoy!

[EDIT]: I now see that someone else (the Sunlight Foundation) had already built a more robust generic view for Django, called django-layar. You should definitely use theirs instead... but if you're curious about more ways to load spatial data into GeoDjango, you might find my notes on Google Code interesting anyway.

October 15, 2009

Tiling Kibera

The upcoming Map Kibera project acquired some imagery recently, and I got ahold of it yesterday to set up a quick TileCache preview. There have actually been quite a few requests here recently for getting tiles up quickly from various sets of source imagery, so I thought I'd write a few blog posts on some different ways to go about it.

First, I'm assuming the end user will be requesting tiles, and that these tiles will be projected in Spherical Mercator for viewing on the web in a client like OpenLayers or Google Maps (so I'm skipping over the bits for creating tiles that might be used in a client like Google Earth). With that in mind, there are a few ways to get your tiles. Note that the Kibera imagery is a nice, simple example, because the area it covers is not that large (about 25 square km), and the source file is only a couple hundred megs as an uncompressed TIF.

Option A: Pre-generate all your tiles in advance

The easiest way to generate all your tiles in advance is probably to use the newish MapTiler software, which is a nice graphical interface for the gdal2tiles project.  After installing MapTiler, I just selected my projection, selected my single TIF file (note that your source files do not have to match the output projection -- and because my data was a GeoTIFF with appropriate metadata, MapTiler automatically figured out the projection info and the appropriate transformation by itself), selected my zoom levels and other options, and hit Render.  Because I wanted tiles all the way up to zoom level 18, it took just under 15 minutes.  The output of MapTiler is just awesome - it creates not only the tiles but also sample Google Maps and OpenLayers HTML, each of which is full of nice features.  I'm impressed (though I'd like to see a CloudMade tile layer or two in the OpenLayers example).

If for any reason MapTiler isn't working out for you, you can also use gdal2tiles directly.  Mano Marks recently wrote a nice tutorial on using gdal2tiles for creating KML superoverlays.  The concept is the same for creating Spherical Mercator tiles - you just need to change the warping projection (to use EPSG:3785 instead of EPSG:4326) and remove the geodetic option from gdal2tiles, and you should be good to go. (Note that EPSG:900913 is equivalent to EPSG:3785, and if you do not have one of them in your epsg file, you may need to add it manually.)
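
If it helps, here's roughly how I'd script those two steps (warp, then tile) from Python. The filenames are placeholders, and the gdal2tiles flags shown (-p for the mercator profile, -z for the zoom range) are the ones I believe apply here - check gdal2tiles.py --help for your GDAL version.

import subprocess

# Warp the source into Spherical Mercator first, then cut tiles from the result.
subprocess.check_call(['gdalwarp', '-t_srs', 'epsg:3785',
                       '09FEB19_BOOST.tif', 'kibera_3785.tif'])
subprocess.check_call(['gdal2tiles.py', '-p', 'mercator', '-z', '0-18',
                       'kibera_3785.tif', 'tiles/'])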

The source imagery was 0.6 meters/pixel, and because we're near the equator, tiles at zoom level 18 are close to the resolution of the original image.  Going up to zoom 19 added a little viewing clarity, but it took ~4x the space required for the level 18 tiles, not to mention the time to render them.  In this case, rendering zoom level 19 alone took over 45 minutes.
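
For the curious, here's the quick back-of-the-envelope math behind those numbers (standard 256-pixel web mercator tiles, measured at the equator):

import math

def meters_per_pixel(zoom):
    # The projected world is 2*pi*6378137 meters wide and 256 * 2**zoom pixels wide.
    return 2 * math.pi * 6378137 / (256 * 2 ** zoom)

print meters_per_pixel(18)  # ~0.60 m/px, about the same as the 0.6 m source imagery
print meters_per_pixel(19)  # ~0.30 m/px, finer than the imagery can really support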

Option B:  Generate tiles on demand

Often you are dealing with a larger dataset than Kibera, and rendering all the tiles might take many hundreds of gigabytes (or much more).  In addition, it's very likely that the vast majority of your tiles will never be requested by any user - rendering the middle of a 'boring' area up to zoom level 20 is basically a waste of space.  But since you can't be exactly sure which tiles will be requested, you may want to render them on demand, and then cache those requested tiles under the assumption that if they were requested once, they're more likely to be requested again.  Another reason to do this is time: it only took 15 minutes to render Kibera up to zoom level 18, but what if you just got imagery for Afghanistan, and you'd like to start looking at the tiles _right now_ instead of waiting overnight (or longer) for the pre-rendering to finish?  One answer is TileCache.

A common use case for TileCache is to put it as middleware between an existing WMS server and the end users.  This works great, but requires that you already have a WMS server configured.  However, TileCache can also read GDAL data formats directly, and then spit out the tiles.  To use this, it's important that you have both PIL and NumPy installed (along with GDAL and TileCache, of course).  Here's a simple TileCache configuration for creating Google Maps-compatible tiles:

[kibera]
type=GDAL
file=/tmp/kibera.tif
spherical_mercator=true
tms_type=google
metatile=yes

In addition, you need to make sure your source data is in the matching Spherical Mercator projection.  To reproject (or transform) the Kibera imagery, I used this command:

gdalwarp -t_srs epsg:3785 09FEB19_BOOST.tif kibera.tif


Finally, you can also use tilecache_seed to pre-render some or all of the tiles using TileCache itself.  It can be useful, for example, to seed all but the last couple of zoom levels (which take relatively little disk space), so the first users of the map won't have to wait for tiles to render until they zoom way in to see some detail.

Tips and Tricks

There are a few things you can do to speed up tile generation and lessen the load on your server.  With a small dataset like this it's not a big deal - but when dealing with bigger data sources, speeding up your render time can mean hours or days of computer time saved.

Transforming your Source Data:  Making sure your source data is in the same projection as your output tiles means more than creating a VRT with the metadata for the projection transformation - it means actually transforming the raw data so it doesn't have to be reprojected on the fly during tile creation.  This has to be done for the TileCache option above, but if you're using MapTiler or gdal2tiles, you may wish to use gdalwarp (as noted at the end of the TileCache section above) to output a new TIFF to use as your source.  The disadvantage is that you end up using extra space for the source data while you render, but if your plan is to pre-render all the tiles, then disk space is probably not your concern.

Creating Overviews: In the Kibera example above, only zoom levels 18 and 19 were near the source dataset's resolution.  All of the lower zoom levels could have been rendered more quickly if they had been read from a coarser (downsampled) data source.  Fortunately, GDAL ships with a utility for creating these downsampled "overviews", which will in turn be used by any of the above rendering methods.  To create overviews of my GDAL data source I run:

gdaladdo kibera.tif 2 4 8 16

I can also add the "-ro" flag to the gdaladdo command, which will create a separate overview file rather than incorporating the overviews directly into my source TIFF.  Either way, this can potentially speed up rendering time for all but your most detailed zoom levels.

Post-Processing:  As MapTiler mentions during the tile creation process, you can save half your disk space or more by shrinking the output tiles with pngnq.  There's a thread here discussing ways to recurse through all your PNG files on Windows or Linux.
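
As a rough example of the recursion part, this is the sort of thing I'd use on Linux. Note that the assumption that pngnq writes its output next to the input with a -nq8.png suffix is from memory, so verify it against your pngnq build before pointing this at a big tile tree.

import os
import subprocess

# Walk the tile tree and quantize every PNG with pngnq, then (if the expected
# output file appeared) move the quantized version over the original.
for dirpath, dirnames, filenames in os.walk('tiles'):
    for name in filenames:
        if not name.endswith('.png') or name.endswith('-nq8.png'):
            continue
        src = os.path.join(dirpath, name)
        subprocess.check_call(['pngnq', src])
        quantized = src[:-len('.png')] + '-nq8.png'
        if os.path.exists(quantized):
            os.rename(quantized, src)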

June 11, 2009

Featureserver on AppEngine

AppEngine is awesome. The more we use it, the more we like it.

Recently, someone contacted us who needed a site up, in a hurry, to serve some points on a Google map. The catch was that there were about 50k points (so server-side clustering seemed like it might be nice). They also wanted to be able to serve at _least_ tens of millions of requests a day. And maybe quite a lot more.

Given the scaling requirements, it seemed like AppEngine might be a nice fit, since then we wouldn't have to worry so much about tons of caching, or ensuring clients made similar bounding box requests, and so forth. As for posting/getting points to/from AppEngine, we decided to use FeatureServer as a base.

If you're not familiar with FeatureServer, a quick overview: it makes it easy to (amongst other things) post/update your features to some datastore, and pull them out with bounding box and/or attribute queries in a variety of vector formats (KML, JSON, WFS, etc.). It not only supports a bunch of different backend datastores (shapefiles, Twitter, PostGIS, Flickr, etc.), but also makes creating new ones simple. And, thanks to crschmidt's usual paving-the-way, setting up FeatureServer on AppEngine was trivial.
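
To make that concrete, pulling features back out is just an HTTP GET. The URL and layer name below are placeholders, and the path/parameter layout is from memory of the FeatureServer docs, so adjust it to match your own deployment.

import urllib2

# Hypothetical deployment: a layer called 'points', queried by bounding box,
# returned as JSON.
url = ('http://example.appspot.com/featureserver/points/all.json'
       '?bbox=-122.52,37.70,-122.35,37.82')
print urllib2.urlopen(url).read()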

So there I am with a nice little FeatureServer running on AppEngine. We set up some cron jobs to do the clustering, and with 50,000 points I run some tests at about 75 queries/second. Everything seems great.

But on further examination, the FeatureServer datastore that currently exists for AppEngine has a couple of problems:
* Because it is based on geohash, it uses up your only inequality filter on the location (bounding box) search, which means you can't filter on anything else that needs one.
* The geohash implementation it uses has some quirks (but that's for another post).

Fortunately, WhereCamp was on while I was thinking about how to solve this, so I was able to ask all kinds of smart people for advice. One of them immediately pointed out that a colleague of his had implemented a clever method for storing points on AppEngine that might just do the trick: GeoModel.

And so it was that I gave GeoModel a try, and it did indeed solve the problems I was having with the geohash implementation. On the downside, GeoModel currently only works with points, but as that is all this particular project needs, it's not a problem at all. Long story short, I simplified our custom datastore this morning and committed it to the FeatureServer codebase. So if you want to very quickly put up a scalable, reasonably robust geo-point datastore with a RESTful (sorry, Sean) interface, GeoModel on AppEngine might be a good way to go.
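
For anyone curious what GeoModel usage looks like, here's a rough sketch based on my memory of its README. The model and its fields are invented for illustration, and the exact signatures (update_location, bounding_box_fetch, geotypes.Box) are worth checking against the GeoModel source.

from google.appengine.ext import db
from geo import geotypes
from geo.geomodel import GeoModel


class PlaceOfInterest(GeoModel):
    # GeoModel supplies the 'location' GeoPt property and the geocell
    # bookkeeping; we only add the attributes we care about.
    name = db.StringProperty()


# Storing a point: set the location, let GeoModel derive its geocells, save.
poi = PlaceOfInterest(name='test', location=db.GeoPt(37.77, -122.42))
poi.update_location()
poi.put()

# Bounding-box query: Box takes north, east, south, west.
box = geotypes.Box(37.82, -122.35, 37.70, -122.52)
results = PlaceOfInterest.bounding_box_fetch(
    PlaceOfInterest.all(), box, max_results=100)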

February 20, 2009

Walk On

A few months ago, Walkscore.com launched their new API, aimed at real estate sites, academic and other large-scale studies, and anyone else who might want programmatic access to the walkability of a set of locations.
Recently, a number of large sites (Zillow.com, Estately.com, BaseEstate.com, and ColoProperty.com, amongst many others) have started to incorporate Walk Score in their listings, which has led to the happy problem of making sure the API can handle the popularity.

When the good folks at Walk Score first contacted me about designing the API, we talked about the likelihood that we'd be serving many, many millions of daily requests shortly after launch. This quickly led to discussions about what framework we wanted to build on, and how much IT we were interested in taking on. Eventually, we decided to use Google's AppEngine. Given the current, and quickly growing, popularity of the API, I think this ended up being a great choice - no worries about having to even think about spawning new EC2 instances or load balancing, let alone how best to optimize the database, Apache, and caching configurations.

The rest of this post is about the first quirk we encountered on AppEngine -- I'll probably post a few more items in the future with details about some of my other experiences working on this very fun project.

Quirk 1: Counting things is not quite as simple on what is basically a key/value datastore as it is in a relational database.

The obvious, and nicely documented, choice for replacing a count(*)-style query is to use sharded counters to keep track of how many things you have. Unfortunately, we do quite a lot of counting in this app: not only of all the various items in the datastore, but also of what type of request each user makes. And we need to summarize a bunch of these counts reasonably frequently (mostly to ensure users are under quota, but also for various other reports).
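
For reference, the documented sharded counter pattern looks roughly like this (the shard count and key naming here are just illustrative):

import random

from google.appengine.ext import db

NUM_SHARDS = 20  # illustrative; more shards means more write throughput


class CounterShard(db.Model):
    name = db.StringProperty(required=True)
    count = db.IntegerProperty(required=True, default=0)


def increment(name, delta=1):
    """Add 'delta' to one randomly chosen shard, inside a transaction."""
    def txn():
        key_name = '%s-%d' % (name, random.randint(0, NUM_SHARDS - 1))
        shard = CounterShard.get_by_key_name(key_name)
        if shard is None:
            shard = CounterShard(key_name=key_name, name=name)
        shard.count += delta
        shard.put()
    db.run_in_transaction(txn)


def get_count(name):
    """Sum the shards - no count(*) needed, just one small query."""
    return sum(s.count for s in CounterShard.all().filter('name =', name))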

Although AppEngine scales super nicely in a lot of areas, there are still limits to how long a request can take before AppEngine decides you are using too many resources of one kind or another and kills that request. In my testing, as the overall queries/second went up, requests that had multiple datastore reads (let alone writes) took longer and longer - once we hit about 40 QPS, HTTP 500 errors due to requests timing out increased dramatically. This held true even when testing their own sample sharded counter application, with about as simple a model as you can build.

My solution, after some more tests and various discussions, was to build a (sharded) counter that relies more heavily on memcache - it only writes to the datastore when a few hundred new counts have accumulated. This means a couple of orders of magnitude less datastore chatter, and it does not seem to have caused us any inaccurate counts (which we can verify by manually counting each of a certain type of entity and comparing against our counter).
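
Building on the sharded-counter sketch above, the buffering idea looks roughly like this. It's a sketch, not the actual Walk Score code, and the flush threshold is illustrative:

from google.appengine.api import memcache

FLUSH_EVERY = 500  # illustrative: roughly one datastore write per 500 increments


def buffered_increment(name):
    # memcache.incr is atomic, and initial_value creates the key on first use.
    pending = memcache.incr('counter-buffer-' + name, initial_value=0)
    if pending and pending % FLUSH_EVERY == 0:
        # Exactly one request sees each multiple of FLUSH_EVERY, so each batch
        # is written to the sharded counter once.  Increments still sitting in
        # memcache can be lost if the key is evicted -- that's the trade-off.
        increment(name, delta=FLUSH_EVERY)  # the sharded write from the sketch above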

So quirk 1 was not a major issue, but it was a great reminder that you still need to think about scaling, even when building on top of a massively scalable infrastructure.