Shaun,

2011/9/2 Shaun Thomas <sthomas@xxxxxxxxx>:
> Ironically, this is actually the topic of my presentation at Postgres Open.

Do you think my problem would now be solved with an NVRAM PCI card?

Stefan

---------- Forwarded message ----------
From: Stefan Keller <sfkeller@xxxxxxxxx>
Date: 2011/9/3
Subject: Re: Summaries on SSD usage?
To: Jesper Krogh <jesper@xxxxxxxx>
Cc: pgsql-performance@xxxxxxxxxxxxxx

2011/9/3 Jesper Krogh <jesper@xxxxxxxx>:
> On 2011-09-03 00:04, Stefan Keller wrote:
>
> It's not that hard to figure out: take some of your "typical" queries,
> say the one above, and change the search term to something you'd expect
> a user to enter but that hasn't been run yet (e.g. "museum" instead of
> "zoo"). Then turn on \timing and run it twice. If the two runs are
> close to each other in timing, you are only hitting memory anyway, and
> neither SSD, NVRAM nor more RAM will buy you anything; faster memory
> and faster CPU cores will. If the second run is significantly faster,
> then more RAM, NVRAM or an SSD is a good fix.
>
> Typically I have slow-query logging turned on permanently, set to
> around 250ms. If I find queries in the log that I didn't expect to
> take more than 250ms, I start investigating whether the query plans
> are correct, and so on.
>
> The above numbers are the "raw data" size, not how PG uses it, right?
> And you haven't told us anything about the size of your current
> system.

It's definitely the case that the second query run is much faster (the
first runs take up to 30 seconds and more). PG uses the raw data for
Switzerland like this: 10 GB total disk space based on 2 GB of raw XML
input. Table osm_point is one of the four big tables; it uses 984 MB
for the table and 1321 MB for its indexes (of which the hstore index is
the biggest, ahead of id, name and geometry).

Stefan
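
[For readers following along: Jesper's two-run test could look like the
sketch below in psql. The hstore column name ("tags") and the search
term are assumptions for illustration, not from the thread; substitute
your own slow query.]

    \timing on

    -- First run: cold, may have to fetch index and heap pages from disk.
    -- "tags" is assumed to be the osm_point hstore column.
    SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>museum';

    -- Second run: the same pages should now sit in shared_buffers or the
    -- OS page cache. A much faster second run means the query is I/O
    -- bound, so more RAM, NVRAM or an SSD helps; near-identical timings
    -- mean it is CPU/memory bound.
    SELECT count(*) FROM osm_point WHERE tags @> 'tourism=>museum';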
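
[The 250ms slow-query logging Jesper mentions corresponds to the
log_min_duration_statement setting; a minimal sketch:]

    -- In postgresql.conf (cluster-wide, takes effect on reload):
    --   log_min_duration_statement = 250ms
    -- Or per session, for testing (superuser only):
    SET log_min_duration_statement = '250ms';
    SHOW log_min_duration_statement;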
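
[Stefan's per-table numbers (984 MB table, 1321 MB indexes) are the
kind of figures the built-in size functions report, e.g.:]

    SELECT pg_size_pretty(pg_relation_size('osm_point'))       AS table_size,
           pg_size_pretty(pg_indexes_size('osm_point'))        AS index_size,
           pg_size_pretty(pg_total_relation_size('osm_point')) AS total_size;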