On Thu, May 7, 2009 at 10:14 AM, David Brain <dbrain@xxxxxxxxxxxxx> wrote:
Have you re-indexed any of your partitioned tables? If your index is fragmented, you'll be incurring extra I/Os per index access. Take a look at the pgstattuple contrib module for some functions to determine index fragmentation. You can also take a look at the pg_stat_all_indexes view. If your number of tuples fetched is 100x more than your idx_scan count, you *may* want to consider reindexing.
--Scott
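As a rough sketch of the checks Scott suggests (index and table names here are hypothetical; on 8.3 the pgstattuple module is installed by running its contrib SQL script, and pgstatindex() reports leaf density and fragmentation for a btree index):

```sql
-- Check fragmentation of one partition's index via pgstattuple's
-- pgstatindex() function (index name is an assumption):
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('datatable_2009_04_ts_idx');

-- Spot indexes where tuples fetched vastly exceed index scans,
-- using the pg_stat_all_indexes statistics view:
SELECT schemaname, relname, indexrelname,
       idx_scan, idx_tup_fetch,
       idx_tup_fetch::numeric / NULLIF(idx_scan, 0) AS tup_per_scan
FROM pg_stat_all_indexes
WHERE idx_scan > 0
ORDER BY tup_per_scan DESC
LIMIT 20;
```

Note that plain REINDEX takes an exclusive lock; on 8.3 a common workaround is to build a replacement with CREATE INDEX CONCURRENTLY and drop the old index afterwards.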
Hi,
Some context: we have a _lot_ of data, > 1TB, mostly in one 'table' (the 'datatable' in the example below), although to improve performance this table is partitioned by date range into a number of partition tables. Each partition contains up to 20GB of data (tens of millions of rows), with an additional ~3GB of indexes. All of this is served off a fairly high-performance server (8 cores, 32GB RAM, with FC-attached SAN storage). PostgreSQL version is 8.3.5, running on 64-bit RHEL 5.2.
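For anyone unfamiliar with the setup being described: on 8.3, date-range partitioning is done with table inheritance plus CHECK constraints. A minimal sketch (all table and column names here are assumptions, not the poster's actual schema):

```sql
-- Parent table; queries target this name.
CREATE TABLE datatable (
    ts      timestamptz NOT NULL,
    payload text
);

-- One child partition per date range, carrying a CHECK constraint
-- so the planner can exclude it for out-of-range queries.
CREATE TABLE datatable_2009_04 (
    CHECK (ts >= '2009-04-01' AND ts < '2009-05-01')
) INHERITS (datatable);

-- Each partition gets its own indexes.
CREATE INDEX datatable_2009_04_ts_idx ON datatable_2009_04 (ts);

-- Partition pruning requires constraint exclusion to be enabled:
-- SET constraint_exclusion = on;
```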
This has been working reasonably well; however, in the last few days I've been seeing extremely slow performance on what are essentially fairly simple 'index hitting' selects on this data.