
Re: "Healing" a table after massive updates

On Thu, Sep 11, 2008 at 8:56 AM, Bill Moran
<wmoran@xxxxxxxxxxxxxxxxxxxxxxx> wrote:
> In response to Alvaro Herrera <alvherre@xxxxxxxxxxxxxxxxx>:
>
>> Bill Moran wrote:
>> > In response to "Gauthier, Dave" <dave.gauthier@xxxxxxxxx>:
>> >
>> > > I might be able to answer my own question...
>> > >
>> > > vacuum FULL (analyze is optional)
>> >
>> > CLUSTER _may_ be a better choice, but carefully read the docs regarding
>> > its drawbacks first.  You may want to do some benchmarks to see if it's
>> > really needed before you commit to it as a scheduled operation.
>>
>> What drawbacks?
>
> There's the whole "there will be two copies of the table on-disk" thing
> that could be an issue if it's a large table.
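Since CLUSTER keeps a second copy of the table on disk until it finishes, it is worth checking the table's current size before scheduling it. A minimal sketch, assuming a database and table named `mydb` and `mytable` (both placeholders):

```shell
# CLUSTER rewrites the table and its indexes into new files, so budget
# roughly the table's current on-disk footprint in free space.
# "mydb" and "mytable" are hypothetical names.
psql -d mydb -c "SELECT pg_size_pretty(pg_total_relation_size('mytable'));"
```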

I've also found CLUSTER to be pretty slow, even on 8.3.  On a server
that sustains 30-40 MB/s of random-access writes during pgbench,
CLUSTER writes out at only 1 to 2 MB/s when it runs, and takes the
better part of a day on our biggest table.  vacuumdb -fz plus reindexdb
ran in about six hours, which meant we could fit it into our
maintenance window.  VACUUM moves a lot more data per second than
CLUSTER.
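For reference, the maintenance-window approach described above amounts to something like the following; the database name is a placeholder, and both commands take exclusive locks, so they belong inside a scheduled window:

```shell
# Full vacuum (-f) with ANALYZE (-z) across the database, then rebuild
# all indexes, since VACUUM FULL on servers of this era tends to bloat
# indexes rather than shrink them.  "mydb" is a hypothetical name.
vacuumdb -f -z -d mydb
reindexdb -d mydb
```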

