Scott Marlowe wrote:
> On Fri, Oct 23, 2009 at 2:32 PM, Jesper Krogh <jesper@xxxxxxxx> wrote:
>> Tom Lane wrote:
>>> Jesper Krogh <jesper@xxxxxxxx> writes:
>>>> Tom Lane wrote:
>>>>> ... There's something strange about your tsvector index. Maybe
>>>>> it's really huge because the documents are huge?
>>>> Huge is a relative term, but length(ts_vector(body)) is about 200 for
>>>> each document. Is that huge?
>>> It's bigger than the toy example I was trying, but not *that* much
>>> bigger. I think maybe your index is bloated. Try dropping and
>>> recreating it and see if the estimates change any.
>> I'm a bit reluctant to drop and re-create it. It would take a
>> couple of days to regenerate, so this should hopefully not be a common
>> situation for the system.
>
> Note that if it is bloated, you can build the replacement index
> concurrently, then drop the old one when the new one finishes. So, no
> time spent without an index.

Nice tip, thanks.

>> It was built from scratch using inserts, all the way up to around 10M
>> rows now; should that result in index bloat? Can I inspect the amount
>> of bloat without rebuilding (or a similarly locking operation)?
>
> Depends on how many lost inserts there were. If 95% of all your
> inserts failed then yeah, it would be bloated.

Fewer than 10,000, I'd bet; the import script more or less ran by itself,
and the only failures were when I manually stopped it to add some more
code.

-- 
Jesper
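
PS: For my own notes, the concurrent rebuild Scott describes would go
roughly like this. This is just a sketch; the index, table, and column
names (body_fts_idx, documents, body_tsv) are placeholders, not my
actual schema:

    -- Build the new index without blocking writes.
    -- (CREATE INDEX CONCURRENTLY cannot run inside a transaction block.)
    CREATE INDEX CONCURRENTLY body_fts_idx_new
        ON documents USING gin (body_tsv);

    -- Once it is ready, swap it in for the bloated one.
    DROP INDEX body_fts_idx;
    ALTER INDEX body_fts_idx_new RENAME TO body_fts_idx;

The swap at the end takes a brief exclusive lock, but that's nothing
compared to a days-long rebuild with no index at all.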
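
PPS: On my own question about inspecting the bloat without a rebuild:
I'm not aware of an exact bloat figure for a GIN index, but the raw
on-disk size is cheap to read and takes no heavy locks, so comparing
it before and after a concurrent rebuild at least quantifies the waste.
Again a sketch, with the same placeholder index name:

    -- Current on-disk size of the index, human-readable.
    SELECT pg_size_pretty(pg_relation_size('body_fts_idx'));

(pg_relation_size reports the size of the relation's main file on disk;
it only reads the catalog, so it doesn't block anything.)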