How about either:

a) Size the buffer pool so all of your data fits into it.

b) Use a RAM-based filesystem (i.e. a memory disk) or an SSD for the data
storage [a memory disk will be faster] with a smaller pool. Your seed data
should be a copy of the datastore on a disk-based filesystem; at startup,
copy the storage files from the physical disk into memory.

A bigger gain can probably be had if you have a tightly controlled suite of
queries that will be run against the database and you can spend the time
tuning each one to ensure it performs no sequential scans (i.e. every query
uses index lookups).

On 5 November 2010 11:32, A B <gentosaker@xxxxxxxxx> wrote:
>>> If you just wanted PostgreSQL to go as fast as possible WITHOUT any
>>> care for your data (you accept 100% dataloss and datacorruption if any
>>> error should occur), what settings should you use then?
>>
>> I'm just curious, what do you need that for?
>>
>> regards
>> Szymon
>
> I was just thinking about the case where I will have almost 100%
> selects, but still need something better than a plain key-value
> store so I can run some SQL queries.
> The server will just boot, load data, run, hopefully not crash, but if
> it does, just start over with load and run.
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance

--
Nick Lello | Web Architect
o +1 503.284.7581 x418 / +44 (0) 8433309374 | m +44 (0) 7917 138319
Email: nick.lello at rentrak.com
RENTRAK | www.rentrak.com | NASDAQ: RENT
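P.S. For the "fast at any cost" settings asked about upthread, a sketch of
the relevant postgresql.conf knobs might look like the fragment below. The
parameter names are real PostgreSQL GUCs; the values are illustrative only,
and with fsync off any crash can corrupt the whole cluster:

```
# postgresql.conf -- speed over safety; a crash may corrupt the cluster
fsync = off                  # never force WAL to disk
synchronous_commit = off     # COMMIT returns before WAL is flushed
full_page_writes = off       # no torn-page protection in WAL
shared_buffers = 4GB         # illustrative: size to hold the working set
```

Since the data is reloadable from seed at boot, losing the cluster on a
crash is exactly the trade-off being accepted here.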
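The startup copy in option (b) can be sketched roughly as below. The
directory names are stand-ins (the script creates throwaway demo paths so
it runs anywhere); in practice SEED would be your durable on-disk copy of
the data directory and RAMDATA a tmpfs mount such as /dev/shm:

```shell
#!/bin/sh
# Sketch: seed a RAM-backed data directory from a durable on-disk copy.
SEED=$(mktemp -d)                    # stand-in for the on-disk seed copy
RAMDATA=${TMPDIR:-/tmp}/pgdata-ram   # stand-in for a tmpfs target

echo "demo" > "$SEED/base_file"      # pretend this is the cluster's data

rm -rf "$RAMDATA"
cp -a "$SEED" "$RAMDATA"             # bulk copy at boot (rsync -a also works)
chmod 700 "$RAMDATA"                 # PostgreSQL requires 0700 on the data dir

ls "$RAMDATA"
# Then point PostgreSQL at the RAM copy, e.g.:
#   pg_ctl -D "$RAMDATA" start
```

On a crash you simply repeat the copy and restart, which matches the
"boot, load data, run, start over" workflow described above.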