Re: REINDEX takes half a day (and still not complete!)

Do you mean maintenance mode rather than recovery? If so:

The following types of administration commands are not accepted during recovery mode:

Thanks.
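
P.S. On a standby you can confirm whether the server is still in recovery before
scheduling any maintenance. A minimal sketch, assuming a 9.0+ server where
pg_is_in_recovery() is available:

    -- returns true while the server is in recovery (hot standby), false otherwise
    SELECT pg_is_in_recovery();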


On Sun, Apr 17, 2011 at 5:30 PM, Phoenix Kiula <phoenix.kiula@xxxxxxxxx> wrote:
Sorry for reviving a thread that went basically unanswered.

I closed the database to all other access so I could focus on maintenance,
and killed all earlier processes so that the maintenance is the only thing
running.

The REINDEX has been running for 3 hours now -- and it is still not finished!

Similarly, if I cancel the REINDEX and issue a VACUUM ANALYZE VERBOSE,
this too seems to just hang there on my big table.
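
If it is simply waiting on a lock held by some leftover session, something like
this should show it -- a rough sketch against the 8.2-era catalogs (the column
names changed in later releases):

    -- sessions waiting on locks, and what they are running (8.2: procpid/current_query)
    SELECT l.pid, l.relation::regclass AS relation, l.mode, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;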

I raised maintenance_work_mem to 2GB for this operation. It's highly
worrisome -- the slow times above are with 2GB of my server dedicated
to PostgreSQL!
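
(For reference, the bump only needs to apply to the maintenance session rather
than the whole server -- roughly like this, a sketch against the links table:)

    -- raise sort memory for this session only, then rebuild the table's indexes
    SET maintenance_work_mem = '2GB';
    REINDEX TABLE links;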

Surely this is not tenable for enterprise environments? I am on a
64-bit Red Hat server with dual Intel Woodcrest CPUs (or whatever they
were called). Postgres is 8.2.9.

How do DB folks do this with small maintenance windows? This is for a
very high-traffic website, so it's beginning to get embarrassing.
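
Would something like building a replacement index with CREATE INDEX
CONCURRENTLY (available since 8.2) and swapping it in be the usual approach?
A sketch, with hypothetical index and column names:

    -- build the new index without blocking writes (cannot run inside a transaction block)
    CREATE INDEX CONCURRENTLY links_url_idx_new ON links (url);
    -- once it is valid, swap it for the bloated one
    -- (a plain index; constraint-backed indexes need different handling)
    DROP INDEX links_url_idx;
    ALTER INDEX links_url_idx_new RENAME TO links_url_idx;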

Would appreciate any thoughts or pointers.

Thanks!



On Mon, Mar 21, 2011 at 9:28 PM, Merlin Moncure <mmoncure@xxxxxxxxx> wrote:
> On Fri, Mar 18, 2011 at 10:07 PM, Phoenix Kiula <phoenix.kiula@xxxxxxxxx> wrote:
>> I have a large table but not as large as the kind of numbers that get
>> discussed on this list. It has 125 million rows.
>>
>> REINDEXing the table takes half a day, and it's still not finished.
>>
>> To write this post I ran "SELECT COUNT(*)", and here's the output -- look how long it took!
>>
>>    select count(*) from links;
>>       count
>>    -----------
>>     125418191
>>    (1 row)
>>
>>    Time: 1270405.373 ms
>>
>> That's 1270 seconds!
>>
>> I suppose the vacuum analyze is not doing its job? As you can see
>> from the settings below, I have autovacuum set to ON, and there's also a
>> cronjob every 10 hours that runs a manual VACUUM ANALYZE on this table,
>> which is the largest.
>>
>> PG is version 8.2.9.
>>
>> Any thoughts on what I can do to improve performance!?
>>
>> Below are my settings.
>>
>>
>>
>> max_connections              = 300
>> shared_buffers               = 500MB
>> effective_cache_size         = 1GB
>> max_fsm_relations            = 1500
>> max_fsm_pages                = 950000
>>
>> work_mem                     = 100MB
>> temp_buffers                 = 4096
>> authentication_timeout       = 10s
>> ssl                          = off
>> checkpoint_warning           = 3600
>> random_page_cost             = 1
>>
>> autovacuum                   = on
>> autovacuum_vacuum_cost_delay = 20
>>
>> vacuum_cost_delay            = 20
>> vacuum_cost_limit            = 600
>>
>> autovacuum_naptime           = 10
>> stats_start_collector        = on
>> stats_row_level              = on
>> autovacuum_vacuum_threshold  = 75
>> autovacuum_analyze_threshold = 25
>> autovacuum_analyze_scale_factor  = 0.02
>> autovacuum_vacuum_scale_factor   = 0.01
>>
>> wal_buffers                  = 64
>> checkpoint_segments          = 128
>> checkpoint_timeout           = 900
>> fsync                        = on
>> maintenance_work_mem         = 512MB
>
> How much memory do you have? You might want to consider raising
> maintenance_work_mem to 1GB.  Are other things going on in the
> database while you are rebuilding your indexes?  Is it possible you
> are blocked waiting on a lock for a while?
>
> How much index data is there?  Can we see the table definition along
> with create index statements?
>
> merlin
>
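
On the 21-minute SELECT COUNT(*) and the question about index volume quoted
above: when an exact count isn't needed, the planner's estimate is essentially
free, and the size functions show how much index data a REINDEX has to
rewrite. A rough sketch (links as in the thread; 8.1+ size functions):

    -- approximate row count from planner statistics (kept fresh by ANALYZE)
    SELECT reltuples::bigint AS approx_rows
    FROM pg_class
    WHERE relname = 'links';

    -- on-disk size of each index on the table
    SELECT c.relname AS index_name, pg_size_pretty(pg_relation_size(c.oid)) AS size
    FROM pg_class c
    JOIN pg_index i ON i.indexrelid = c.oid
    WHERE i.indrelid = 'links'::regclass;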

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

