2010/5/28 Konrad Garus <konrad.garus@xxxxxxxxx>:
> 2010/5/27 Cédric Villemain <cedric.villemain.debian@xxxxxxxxx>:
>
>> Exactly. And the time to browse depends on the number of blocks
>> already in core memory.
>> I am interested in test results and benchmarks if you are going to
>> do some :)
>
> I am still thinking about whether I want to do it on this prod
> machine. Maybe on something less critical first (but still with a
> good amount of memory mapped by page buffers).
>
> What system have you tested it on? Has it ever run on a few-gig
> system? :-)

The stats part has been used on databases up to 300GB. The
snapshot/restore was done on databases around 40-50GB, but with only
16GB of RAM.

I really think some improvements are possible before using it in
production, even if it should work well as it is. At least something
to remove the orphan snapshot files (in case of DROP TABLE or
TRUNCATE), and probably improving the quality of the code around the
prefetch: better handling of effective_io_concurrency (the prefetch is
linear, but block requests are grouped). Rough sketches of both ideas
follow after my signature.

If you are able to test/benchmark on a pre-production environment, do
it :)

--
Cédric Villemain
2ndQuadrant
http://2ndQuadrant.fr/
PostgreSQL : Expertise, Formation et Support
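
A minimal sketch of the orphan cleanup, in C. It assumes (the naming
is an assumption, not pgfincore's confirmed layout) that the snapshot
files sit in one directory and are each named after the relfilenode of
the relation they were taken from: a snapshot whose relation data file
has disappeared (DROP TABLE, or TRUNCATE assigning a new relfilenode)
gets unlinked.

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Remove snapshot files whose relation data file no longer exists.
 * snapdir holds the snapshot files; reldir is the database directory
 * (e.g. base/<dboid>).  Both the directory layout and the
 * relfilenode-based file naming are assumptions for this sketch.
 */
static void
cleanup_orphan_snapshots(const char *snapdir, const char *reldir)
{
    DIR *dir = opendir(snapdir);
    struct dirent *de;

    if (dir == NULL)
        return;

    while ((de = readdir(dir)) != NULL)
    {
        char relpath[4096];
        char snappath[4096];
        struct stat st;

        if (de->d_name[0] == '.')
            continue;

        snprintf(relpath, sizeof(relpath), "%s/%s", reldir, de->d_name);

        /* relation file gone => the snapshot is orphaned */
        if (stat(relpath, &st) != 0)
        {
            snprintf(snappath, sizeof(snappath), "%s/%s",
                     snapdir, de->d_name);
            (void) unlink(snappath);
        }
    }
    closedir(dir);
}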
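
And a minimal sketch of the grouped prefetch. The function name, the
BLCKSZ definition and the calling convention are illustrative, not
pgfincore's actual code: the point is only that contiguous runs of
block numbers collapse into a single
posix_fadvise(POSIX_FADV_WILLNEED) call instead of one call per block,
so the number of advisory requests is proportional to the number of
runs rather than the number of blocks.

#define _XOPEN_SOURCE 600    /* for posix_fadvise */

#include <fcntl.h>
#include <stddef.h>

#define BLCKSZ 8192          /* PostgreSQL default block size */

/*
 * Advise the kernel to read ahead the given blocks, assumed sorted.
 * Contiguous block numbers are coalesced so each run costs a single
 * posix_fadvise() call.  fd is an already-open relation data file.
 */
static void
prefetch_blocks(int fd, const unsigned int *blocks, size_t nblocks)
{
    size_t i = 0;

    while (i < nblocks)
    {
        size_t run = 1;

        /* extend the run while block numbers stay contiguous */
        while (i + run < nblocks && blocks[i + run] == blocks[i] + run)
            run++;

        /* one advisory read-ahead request for the whole run */
        (void) posix_fadvise(fd,
                             (off_t) blocks[i] * BLCKSZ,
                             (off_t) run * BLCKSZ,
                             POSIX_FADV_WILLNEED);

        i += run;
    }
}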