On 22/04/11 01:33, Florian Weimer wrote:
> * Greg Smith:
>
>> The fact that every row update can temporarily use more than 8K means
>> that actual write throughput on the WAL can be shockingly large. The
>> smallest customer I work with regularly has a 50GB database, yet they
>> write 20GB of WAL every day. You can imagine how much WAL is
>> generated daily on systems with terabyte databases.
>
> Interesting. Is there an easy way to monitor WAL traffic? It does
> not have to be fine-grained, but it might be helpful to know whether
> we're doing 10 GB, 100 GB or 1 TB of WAL traffic on a particular
> database, should the question of SSDs ever come up.
One thought I had on monitoring write usage: if you're on Linux with the
ext4 filesystem, the filesystem keeps some write statistics for you.
Check out /sys/fs/ext4/$DEV/lifetime_write_kbytes
(where $DEV is the device the filesystem lives on, e.g. sda1, dm-0, or
whatnot; see /dev/mapper to get the mappings from LVM volumes to dm numbers)
If you log that value every day, you could get an idea of your daily
write load.
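
For example, something along these lines could be run once a day from cron
(just a rough sketch; the device name and log path below are placeholders,
not anything standard):

    #!/usr/bin/env python
    # Sketch: append the ext4 lifetime write counter to a log file,
    # e.g. from a daily cron job. The device name and log path are
    # assumptions - adjust them for your setup.
    import time

    DEV = "dm-0"   # device your data directory lives on; see /dev/mapper
    STAT = "/sys/fs/ext4/" + DEV + "/lifetime_write_kbytes"
    LOG = "/var/log/lifetime_writes.log"   # hypothetical log file

    # Read the counter (total kilobytes ever written to this filesystem).
    with open(STAT) as f:
        kbytes = int(f.read().strip())

    # Append a timestamped entry; diff consecutive entries to get the
    # kilobytes written per day.
    with open(LOG, "a") as log:
        log.write("%s %d\n" % (time.strftime("%Y-%m-%d %H:%M:%S"), kbytes))

Note that the counter covers everything written to that filesystem, not
just the WAL, so the daily difference is only an upper bound on WAL
traffic unless the WAL sits on its own device.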
-Toby