On Thu, Oct 19, 2006 at 06:39:22PM +0200, Tobias Brox wrote:
> [Jim C. Nasby - Thu at 11:31:26AM -0500]
> > Yeah, test setups are a good thing to have...
>
> We would need to replicate the production traffic as well to do
> reliable tests. Well, we'll get to that one day ...

Marginally reliable tests are usually better than none at all. :)

> > The issue with pg_xlog is you don't need bandwidth... you need
> > super-low latency. The best way to accomplish that is to get a
> > battery-backed RAID controller that you can enable write caching on.
>
> Sounds a bit risky to me :-)

Well, you do need to understand what happens if the machine loses
power... namely, you have a limited amount of time to restore power to
the machine so that the controller can flush that cached data out to
disk. Other than that, it's not very risky.

As for shared_buffers, conventional wisdom has been to use between 10%
and 25% of memory, tending toward the lower end as you get into larger
amounts of memory. So in your case, 600M wouldn't be pushing things much
at all; even 1G wouldn't be out of the ordinary. Also remember that the
more memory you give to shared_buffers, the less is left for sorting,
hashing, etc. (work_mem).

--
Jim Nasby                                            jim@xxxxxxxxx
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)
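
[Editor's note: the sizing advice above could be sketched as a
postgresql.conf fragment. This is a hedged illustration only; the 4GB
RAM figure and the exact values are assumptions chosen to match the
600M example in the thread, not settings from the original discussion.]

```ini
# Hypothetical postgresql.conf sketch for a machine with ~4GB RAM.
# shared_buffers at ~15% of RAM -- within the 10-25% guideline,
# toward the lower end since this is a fair amount of memory.
shared_buffers = 600MB

# work_mem is per-sort/per-hash, per backend; keep it modest so that
# memory given to shared_buffers doesn't starve sorts and hashes.
work_mem = 16MB

# If pg_xlog sits behind a battery-backed write-caching RAID
# controller, fsync stays on: the controller absorbs the latency
# while the battery protects the cached WAL data on power loss.
fsync = on
```

Values like work_mem depend heavily on the workload (number of
concurrent backends and the complexity of their queries), so they are
illustrative rather than a recommendation.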