
Re: Vacuum Stalling

Brad Nicholson <bnichols@xxxxxxxxxxxxxxx> writes:
> On Tue, 2007-07-10 at 11:31 -0400, Tom Lane wrote:
>> How big is this index again?

> Not sure which one it's working on - there are 6 of them, each ~2.5GB

OK, about 300K pages each ... so even assuming the worst case that
each page requires a physical disk seek, it should take less than an
hour to vacuum each one.  So 10 hours is beginning to sound a bit
suspicious to me too, though it's not beyond the threshold of
incredulity quite yet.
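The arithmetic above can be sketched quickly (a minimal back-of-envelope check, assuming PostgreSQL's default 8 kB block size and roughly 10 ms per random disk seek, which are assumptions not stated in the thread):

```python
# Worst-case time to vacuum one ~2.5GB index, assuming every page
# costs a full random seek (~10 ms) and 8 kB pages.
index_bytes = 2.5 * 1024**3
page_size = 8192                     # PostgreSQL default block size
pages = index_bytes / page_size      # ~327,680 pages, i.e. "about 300K"
seek_seconds = 0.010                 # assumed cost of one random seek
worst_case_hours = pages * seek_seconds / 3600
print(int(pages), round(worst_case_hours, 2))  # 327680 0.91
```

So even under pessimistic assumptions each index should take under an hour, which is why 10 hours for the whole set looks suspicious.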

It's conceivable that that index has been corrupted in such a way
that there's a loop of pages whose right-links point back to each other,
which would cause the btbulkdelete scan to never terminate.  If that's
the case then the best fix is to REINDEX.  But I think I'd counsel
letting the VACUUM run awhile longer first, just in case it will finish;
unless you have clear evidence that it won't, like previous runs having
also gone until killed.  One thing you could try is strace'ing the
vacuum for awhile to see if you can detect any evidence of fetching the
same pages over and over.  (This would also help you find out which
index it's working on.)

			regards, tom lane

