2011/9/2 Scott Marlowe <scott.marlowe@xxxxxxxxx>:
> On Tue, Aug 30, 2011 at 11:23 AM, Stefan Keller <sfkeller@xxxxxxxxx> wrote:
> How big is your DB?
> What kind of reads are most common, random access or sequential?
> How big of a dataset do you pull out at once with a query?
>
> SSDs are usually not a big winner for read-only databases.
> If the dataset is small (a dozen or so gigs), get more RAM to fit it in.
> If it's big and sequentially accessed, then build a giant RAID-10 or RAID-6.
> If it's big and randomly accessed, then buy a bunch of SSDs and RAID them.

My dataset is a mirror of OpenStreetMap, updated daily. For Switzerland it's about 10 GB of total disk space used (half for tables, half for indexes), built from 2 GB of raw XML input. Europe would be about 70 times larger (130 GB of raw input), and the world has 250 GB of raw input.

It's accessed both randomly (= index scan?) and sequentially (= seq scan?) with queries like:

  SELECT * FROM osm_point
  WHERE tags @> hstore('tourism','zoo')
    AND name ILIKE 'Zoo%';

You can try it yourself online, e.g.
http://labs.geometa.info/postgisterminal/?xapi=node[tourism=zoo]

So I'm still unsure what's better: SSD, NVRAM (PCI card), or plain RAM? And I'm eager to understand whether unlogged tables could help in any way.

Yours, Stefan

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
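
[Editor's note: whichever storage wins, the query above can usually be made index-driven rather than seq-scan-driven. The following is a sketch, assuming the hstore and pg_trgm extensions are available (pg_trgm index support for ILIKE requires PostgreSQL 9.1+); the index names are illustrative, not from the original thread:]

```sql
-- GIN index on the hstore column: lets the planner answer
-- tags @> hstore('tourism','zoo') with an index scan.
CREATE INDEX osm_point_tags_idx
    ON osm_point USING gin (tags);

-- Trigram GIN index on name: can serve ILIKE 'Zoo%' (9.1+).
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX osm_point_name_trgm_idx
    ON osm_point USING gin (name gin_trgm_ops);

-- Check that the planner now chooses a bitmap index scan:
EXPLAIN
SELECT * FROM osm_point
WHERE tags @> hstore('tourism','zoo')
  AND name ILIKE 'Zoo%';
```

With both indexes in place, only the matching rows are fetched, which shrinks the working set and reduces how much the SSD-vs-RAM question matters for this particular query shape.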