Re: limiting performance impact of wal archiving.

Laurent Laborde wrote:
> It is on a separate array which does everything except the tablespace (on a
> separate array) and the indexspace (on another separate array).
On Linux, the kind of writes done to the WAL volume (where data is constantly being flushed out) means the WAL volume must not be shared with anything else if you want it to perform well. Otherwise you'll typically end up with other things being written out along with it, because the kernel can't selectively flush only the WAL data. The whole "write barriers" implementation is supposed to fix that, but in practice it rarely does.
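You can at least check whether barriers are in play on your WAL volume. A minimal sketch, assuming an ext3 filesystem; the /wal mount point is made up for illustration:

    # See what options the WAL volume is currently mounted with
    grep /wal /proc/mounts

    # ext3 has historically defaulted to barrier=0; turn barriers on explicitly
    mount -o remount,barrier=1 /wal

    # If a lower layer (LVM, some RAID controllers) can't honor barriers,
    # the kernel usually complains; look for barrier-related messages
    dmesg | grep -i barrier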

If you put many drives into one big array, somewhere around six or more, at that point you might put the WAL on that big volume too and be OK (presuming a battery-backed write cache, which you have). But if you're carving up array sections this finely for other purposes, it doesn't sound like your WAL data is on a big array. A big shared array or a dedicated pair of disks (RAID1) are the two WAL setups that work well, and if I have a bunch of drives I always prefer a dedicated one, mainly because it makes it easy to see exactly how much WAL activity is going on just by watching that drive.
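With a dedicated drive, that kind of monitoring is one command. A sketch, where "sdc" stands in for whatever device actually holds your WAL:

    # Extended per-device stats every 5 seconds; the write throughput
    # column for the WAL device is a direct read on WAL activity
    iostat -x sdc 5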

> Well, actually, I also changed the configuration to synchronous_commit=off.
> It probably was *THE* problem with checkpoints and archiving :)
This basically trades the standard commit guarantee for one where you'll lose some recently committed transactions if there's a crash. If you're OK with that, great; if not, expect to lose some number of transactions any time the server goes down unexpectedly while configured like this.
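Note that this doesn't have to be an all-or-nothing setting. A sketch (the table here is hypothetical): synchronous_commit can be relaxed for just the transactions you can afford to lose, while everything else keeps the full guarantee:

    -- Leave synchronous_commit = on in postgresql.conf, then relax it
    -- only inside low-value transactions:
    BEGIN;
    SET LOCAL synchronous_commit TO OFF;  -- applies to this transaction only
    INSERT INTO activity_log (message) VALUES ('page view');  -- made-up table
    COMMIT;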

Generally, if checkpoints and archiving are painful, the first things to do are to increase checkpoint_segments to a very high value (>100), increase checkpoint_timeout too, and push shared_buffers up to a large chunk of RAM. Disabling synchronous_commit should be a last resort, for when your performance issues are so bad you have no choice but to sacrifice some data integrity just to keep things going while you rearchitect to improve them.
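As a concrete starting point, a sketch only; the values are illustrative assumptions, not tuning advice for any particular box:

    # postgresql.conf -- example values, adjust to your workload and RAM
    checkpoint_segments = 128     # the default of 3 is far too low for a busy server
    checkpoint_timeout = 30min    # checkpoint less often
    shared_buffers = 4GB          # e.g. roughly 25% of RAM on a dedicated server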

> E.g., historically we used JFS with LVM on Linux, from the good old days
> when I/O wasn't a problem.
> I heard that ext3 is no better for PostgreSQL. What else? XFS?
You never want to use LVM under Linux if you care about performance. It adds overhead that drops throughput no matter what, and it's filled with limitations. For example, I mentioned write barriers above as one way to mix WAL writes with other traffic without having to write the whole filesystem cache out. Guess what: they don't work at all if you're using LVM. Much like virtual machines, LVM is an approach only suitable for low- to medium-performance systems where your priority is easier management rather than speed.

Given the current quality of Linux filesystem code, I hesitate to use anything but ext3, because I consider it just barely reliable enough even as the most popular, and therefore most heavily tested, filesystem by far. JFS and XFS have some benefits, but none compelling enough to make up for how much less testing they get. That said, there seem to be a fair number of people happily running high-performance PostgreSQL instances on XFS.

--
Greg Smith    greg@xxxxxxxxxxxxxxx    Baltimore, MD

