David Lang wrote:
The application can't tell the difference, but the reason for separating
them isn't for the application's sake; it's so that different pieces of
hardware can work on different things without having to bounce back and
forth between them.
Using the same drives with LVM doesn't achieve this goal.
The problem is that the WAL does a LOT of writes, and postgres waits
until each write has completed before going on to the next thing (for
safety). If a disk is dedicated to the WAL, the head barely moves; if
the disk is used for other things as well, the heads have to move across
the disk surface between the WAL and wherever the data lives.
This drastically reduces the rate at which entries can go into the WAL,
and therefore slows down the entire system.
This slowdown isn't even something as simple as cutting your speed in
half (half the time spent on the WAL, half spent on the data itself);
it's more like 10% spent on the WAL, 10% spent on the data, and
80% spent seeking back and forth between them. (I'm probably wrong on
the exact numbers, but it's something similarly drastic.)
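For what it's worth, you can see the effect outside of postgres with a tiny
script that round-robins fsync'd appends across files on the same spindle vs.
separate spindles. This is only a rough sketch; the mount points below are
made up, and a write-caching / battery-backed controller will mask the effect:

# Rough sketch of why interleaving WAL-style fsync writes with other I/O on
# the same spindle hurts: alternating between two files forces a head seek
# per synchronous write. Mount points are hypothetical; point them at the
# disks you actually want to compare.
import os, time

BLOCK = b"\0" * 8192          # roughly one WAL page
N = 500                       # number of synchronous writes per run

def open_append(path):
    return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)

def run(paths):
    """Round-robin N fsync'd appends across the given files; return seconds."""
    fds = [open_append(p) for p in paths]
    start = time.time()
    for i in range(N):
        fd = fds[i % len(fds)]
        os.write(fd, BLOCK)
        os.fsync(fd)          # block until the write is on disk, like the WAL does
    for fd in fds:
        os.close(fd)
    return time.time() - start

print("WAL alone:           %.2fs" % run(["/mnt/raid1a/waltest"]))
print("WAL + data, 1 disk:  %.2fs" % run(["/mnt/raid1a/waltest", "/mnt/raid1a/datatest"]))
print("WAL + data, 2 disks: %.2fs" % run(["/mnt/raid1a/waltest", "/mnt/raid1b/datatest"]))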
Yeah, I don't think I was clear about the config. It's four disks set up
as a pair of RAID1 sets. My original config had pgsql entirely on the
first RAID set (data and WAL). I'm now experimenting with putting the
data/pg_xlog directory on the second set of disks.
Under the old setup (everything on the original RAID1 set, in a
dedicated 32GB LVM volume), I was seeing 80-90% I/O wait in "top". My
understanding is that this is an indicator of an overloaded /
bottlenecked disk system. This was while doing massive inserts into a
test table (millions of narrow rows). I'm waiting to see what happens
once data/pg_xlog is on the 2nd disk set.
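For reference, the insert load looks roughly like the sketch below (the table
name, batch size, and connection string are made up for illustration; the real
test is just millions of narrow rows committed in batches while I watch the
wait percentage in top):

# Hypothetical bulk-insert load: millions of narrow rows, committed in
# batches so every commit forces a WAL flush.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=testdb")      # hypothetical connection string
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS narrow_test (id bigint, val integer)")

BATCH = 10000
for batch in range(1000):                     # ~10 million rows total
    rows = [(batch * BATCH + i, i % 100) for i in range(BATCH)]
    execute_values(cur, "INSERT INTO narrow_test (id, val) VALUES %s", rows)
    conn.commit()                             # each commit waits on the WAL

cur.close()
conn.close()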
Thanks for the input.