Nick Bower wrote:
> We're considering using Postgresql for storing gridded metadata - each point
> of our grids has a variety of metadata attached to it (including lat/lon,
> measurements, etc) and would constitute a record in Postgresql+Postgis.
>
> Size-wise, grids are about 4000x700 and are collected twice daily over say 10
> years. As mentioned, each record would have up to 50 metadata attributes
> (columns) including geom, floats, varchars etc.
>
> So given 4000x700x2x365x10 > 2 billion, is this going to be a problem if we
> will be wanting to query on datetimes, Postgis lat/lon, and integer-based
> metadata flags?
>
> If however I'm forced to sub-sample the grid, what rule of thumb should I be
> looking to be constrained by?
>
> Thanks for any pointers, Nick

Tablespaces and table partitioning will be crucial for your needs. I'm not
sure whether you can partition indexes, though.

It's also too bad that compressed bitmap indexes have not been implemented
yet. For indexes with high "key cardinality", they save a *lot* of space,
and queries can run a lot faster.

--
Ron Johnson, Jr.
Jefferson LA  USA

Is "common sense" really valid? For example, it is "common sense" to
white-power racists that whites are superior to blacks, and that those
with brown skins are mud people. However, that "common sense" is
obviously wrong.
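For the archives, a minimal sketch of what the partitioning suggestion might look like with the inheritance-based scheme PostgreSQL offers today (CHECK constraints plus constraint_exclusion). All table, column, and index names here are invented for illustration, not taken from Nick's schema:

```sql
-- Hypothetical parent table; one row per grid point observation.
CREATE TABLE grid_point (
    obs_time  timestamp NOT NULL,
    geom      geometry,          -- PostGIS lat/lon point
    qc_flag   integer,           -- integer-based metadata flag
    value     real
    -- ... the remaining metadata columns
);

-- One child table per month; the CHECK constraint lets the planner
-- skip partitions that cannot match a query's time range.
CREATE TABLE grid_point_y2006m09 (
    CHECK (obs_time >= '2006-09-01' AND obs_time < '2006-10-01')
) INHERITS (grid_point);

-- Indexes exist per child table, which keeps each one small --
-- effectively a workaround for not being able to partition an index.
CREATE INDEX grid_point_y2006m09_time_idx
    ON grid_point_y2006m09 (obs_time);
CREATE INDEX grid_point_y2006m09_geom_idx
    ON grid_point_y2006m09 USING gist (geom);

-- With constraint exclusion enabled, a query against the parent
-- only touches the children whose CHECK constraints can match.
SET constraint_exclusion = on;
SELECT count(*)
  FROM grid_point
 WHERE obs_time >= '2006-09-10' AND obs_time < '2006-09-11';
```

Each child can also be placed in its own tablespace (`CREATE TABLE ... TABLESPACE ...`) to spread 20-billion-odd rows across spindles.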