On Tue, Nov 10, 2009 at 9:52 AM, Laurent Laborde <kerdezixe@xxxxxxxxx> wrote:
> On Tue, Nov 10, 2009 at 5:35 PM, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
>> disks (RAID1) are the two WAL setups that work well, and if I have a bunch
>> of drives I personally always prefer a dedicated drive mainly because it
>> makes it easy to monitor exactly how much WAL activity is going on by
>> watching that drive.

I do the same thing for the same reasons.

> On the "new" slave I have 6 disks in RAID-10 and 2 disks in RAID-1.
> I thought about doing the same thing with the master.

It would be a worthwhile change to make.  As long as there's no heavy log
write load on the RAID-1, put the pg_xlog there (there's a quick sketch of
the usual symlink move at the end of this mail).

>> Generally if checkpoints and archiving are painful, the first thing to do
>> is to increase checkpoint_segments to a very high amount (>100), increase
>> checkpoint_timeout too, and push shared_buffers up to be a large chunk of
>> memory.
>
> Shared_buffers is 2GB.

On some busy systems with lots of small transactions, a large
shared_buffers can make things run slower rather than faster due to
background writer overhead.

> I'll reread the documentation about checkpoint_segments.
> thx.

Note that if you've got a slow IO subsystem, a large number of checkpoint
segments can result in REALLY long restart times after a crash, as well as
really long waits for shutdown and/or the bgwriter once you've filled them
all up.  (An illustrative postgresql.conf excerpt is at the end of this
mail.)

>> You never want to use LVM under Linux if you care about performance.  It
>> adds a bunch of overhead that drops throughput no matter what, and it's
>> filled with limitations.  For example, I mentioned write barriers being
>> one way to interleave WAL writes with other types without having to write
>> the whole filesystem cache out.  Guess what: they don't work at all if
>> you're using LVM.  Much like using virtual machines, LVM is an approach
>> only suitable for low to medium performance systems where your priority
>> is easier management rather than speed.
>
> *doh* !!
> Everybody told me "nooo ! LVM is ok, no perceptible overhead, etc ..."
> Are you 100% sure about LVM ? I'll happily trash it :)

Everyone who doesn't run databases thinks LVM is plenty fast.  Under a
database it is not so quick.  Do your own testing to be sure (a rough
pgbench recipe is at the end of this mail), but I've seen slowdowns of
about 50% under it on fast RAID arrays.

>> Given the current quality of Linux code, I hesitate to use anything but
>> ext3 because I consider that just barely reliable enough even as the most
>> popular filesystem by far.  JFS and XFS have some benefits to them, but
>> none so compelling as to make up for how much less testing they get.
>> That said, there seem to be a fair number of people happily running
>> high-performance PostgreSQL instances on XFS.
>
> Thx for the info :)

Note that XFS gets a LOT of testing, especially under Linux.  That said,
it's still probably running under only 1/10th as many databases (or fewer)
as ext3 is on Linux.  I've used it before and it's a little faster than
ext3 at some things, especially deleting large files (or, in PG's case,
lots of 1GB files), which can make ext3 crawl.
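For what it's worth, moving pg_xlog onto that RAID-1 is normally just a
stop / move / symlink dance.  A minimal sketch -- the data directory and
mount point (/var/lib/pgsql/data and /mnt/wal) are only placeholders for
whatever you actually use:

    # stop the cluster first; never move pg_xlog while the server is running
    pg_ctl -D /var/lib/pgsql/data stop -m fast

    # move the WAL directory to the dedicated RAID-1 and leave a symlink behind
    mv /var/lib/pgsql/data/pg_xlog /mnt/wal/pg_xlog
    ln -s /mnt/wal/pg_xlog /var/lib/pgsql/data/pg_xlog

    pg_ctl -D /var/lib/pgsql/data start

Once it's back up, watching iostat on that volume gives you exactly the
view of WAL activity Greg was describing.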
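On the checkpoint settings, here's an illustrative postgresql.conf excerpt.
Treat the numbers as starting points to test on your own hardware, not
recommendations:

    checkpoint_segments = 128           # ">100" as suggested above; each segment is 16MB of WAL
    checkpoint_timeout = 30min
    checkpoint_completion_target = 0.9  # spread checkpoint writes over more of the interval
    shared_buffers = 2GB                # what you already run; watch for the bgwriter overhead above

The crash-recovery caveat still applies: the more segments you allow, the
more WAL there can be to replay after a crash.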
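If you want to put a number on the LVM penalty before trashing it, run the
same pgbench load once on an LVM-backed volume and once on a plain
partition of the same disks.  The scale factor and client count below are
placeholders; pick a scale well past RAM so the disks actually get
exercised:

    # initialize a test database (scale 1000 is roughly 15GB)
    pgbench -i -s 1000 pgbench_test

    # 16 clients for 10 minutes; -T needs 8.4, on 8.3 use -t <transactions> instead
    pgbench -c 16 -T 600 pgbench_test

Compare the TPS from the two runs and you'll know what LVM is costing you
on your hardware.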
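The large-file deletion difference is also easy to see directly; something
like this (file count and size are arbitrary), run on ext3 and then on XFS,
shows it:

    # create twenty 1GB files, then time how long it takes to unlink them
    for i in $(seq 1 20); do dd if=/dev/zero of=bigfile.$i bs=1M count=1024; done
    sync
    time rm -f bigfile.*

ext3 has to walk and free all of a big file's indirect blocks on delete,
which is where the crawl comes from; XFS's extent-based layout makes the
same rm nearly instant.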