On Fri, 2006-06-09 at 14:41, Jim C. Nasby wrote:
> AFAIK, the reason why separating pg_xlog from the base files provides so
> much performance is because the latency on pg_xlog is critical: a
> transaction can't commit until all of its log data is written to disk
> via fsync, and if you're trying to fsync frequently on the same drive as
> the data tables are on, you'll have a big problem with the activity on
> the data drives competing with trying to fsync pg_xlog rapidly.
>
> But if you have a RAID array with a battery-backed controller, this
> shouldn't be anywhere near as big an issue. The fsync on the log will
> return very quickly thanks to the cache, and the controller is then free
> to batch up writes to pg_xlog. Or at least that's the theory.
>
> Has anyone actually done any testing on this? Specifically, I'm
> wondering if the benefit of adding 2 more drives to a RAID10 outweighs
> whatever penalties there are to having pg_xlog on that RAID10 with all
> the rest of the data.

I tested this way back when 7.4 first came out, on a machine with a BBU, and it didn't seem to make any difference how I set up the hard drives: RAID-5, RAID 1+0, RAID-1, it was all about the same. With the BBU, transactions per second varied very little. If I recall correctly, it was something like 600 or so tps with pgbench (scaling factor and number of clients were both around 20, I believe). It's been a while.

In the end, that server ran with a pair of 18 GB drives in a RAID-1 and was plenty fast for what we used it for. Due to corporate shenanigans it was still running pgsql 7.2.x at the time. Ugh.

I don't yet have access to a couple of Dell servers I might be able to test this on... maybe after our security audit.
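For anyone who wants to repeat the comparison described above, a minimal sketch of the procedure with pgbench might look like the following. All paths, the mount point /mnt/xlog_disk, the database name, and the scale/client numbers are illustrative assumptions (the post only says scaling and clients were "around 20"), and it of course requires a running PostgreSQL server:

```shell
# Illustrative sketch only: assumes a PostgreSQL installation with its
# data directory at /var/lib/pgsql/data and a spare disk mounted at
# /mnt/xlog_disk. Adjust paths and numbers to your own setup.

# 1. Initialize a pgbench test database at scaling factor ~20, as in the post.
pgbench -i -s 20 testdb

# 2. Baseline run: pg_xlog on the same array as the data.
#    -c = concurrent clients, -t = transactions per client.
pgbench -c 20 -t 1000 testdb

# 3. Move pg_xlog (renamed pg_wal in PostgreSQL 10+) to the separate
#    spindle, symlink it back, restart, and rerun the benchmark.
pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/xlog_disk/pg_xlog
ln -s /mnt/xlog_disk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start
pgbench -c 20 -t 1000 testdb
```

Compare the "tps" figures from the two runs; if the controller's battery-backed cache is doing its job, the theory above predicts they should come out close.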