Re: Autovacuum issues with truncate and create index ...


 



Hi,

> Could you show that output you base that on?

EXPLAIN on a table that was recently analyzed by the autovacuum process:

explain delete from agg_t1343_incoming_a3_src_net_and_dst_net_f5 where start_date < 1353317127200;
                                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using agg_t1343_incoming_a3_src_net_and_dst_net_f5_start_date on agg_t1343_incoming_a3_src_net_and_dst_net_f5  (cost=0.00..9.01 rows=41 width=6)
   Index Cond: (start_date < 1353317127200::bigint)
(2 rows)

Here are the duration and the number of deleted rows for the query above:
Query duration: 4s 538ms for 15960 rows deleted



EXPLAIN on a table that was last analyzed by the autovacuum process four days ago:

explain delete from agg_t1187_incoming_a6_dst_port_and_proto_f5 where start_date < 1353317127200;
                                            QUERY PLAN                                            
--------------------------------------------------------------------------------------------------
 Seq Scan on agg_t1187_incoming_a6_dst_port_and_proto_f5  (cost=0.00..58063.86 rows=3269 width=6)
   Filter: (start_date < 1353317127200::bigint)
(2 rows)

Here are the duration and the number of deleted rows for the query above:
Query duration: 52s 368ms for 21130 rows deleted

The first EXPLAIN shows that an index scan is used, whereas the second one shows a sequential scan.

That is why we concluded that we had to keep our tables' statistics up to date, so that the planner keeps choosing the index.
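To verify that conclusion, one way is to compare when autovacuum last analyzed each table and, if needed, refresh the statistics by hand. A minimal sketch (the table names are the ones from the EXPLAIN output above; `pg_stat_user_tables` is the standard statistics view):

```sql
-- When did autovacuum last vacuum/analyze the two tables?
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname IN ('agg_t1343_incoming_a3_src_net_and_dst_net_f5',
                  'agg_t1187_incoming_a6_dst_port_and_proto_f5');

-- Refresh the planner statistics manually instead of waiting for autovacuum:
ANALYZE agg_t1187_incoming_a6_dst_port_and_proto_f5;
```

If the plan switches to an index scan after the manual ANALYZE, stale statistics were indeed the cause.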


> When the bottleneck is disk I/O the CPUs count is not going to
> help. Threads which have not been context-switched out, but are
> sitting waiting for the electric motors to drag the disk arm to the
> right cylinder probably don't count against the load average.

My mistake: when I said that the server had a 0.56 load over the last 15 minutes, that was before the change to the autovacuum thresholds; after that change, the load just blew up.

> Note that while three autovacuum processes normally don't cause any
> grief, you seem to be near the tipping point anyway, so it may be a
> case of "the straw that broke the camel's back". Especially since
> you made autovacuum many times more resource-hungry than it is by
> default.

We tried to change the autovacuum thresholds without changing autovacuum_vacuum_cost_delay and autovacuum_vacuum_cost_limit.
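For reference, these are the default values of the two throttling settings we left untouched (from postgresql.conf); thresholds can also be overridden per table instead of globally, which is a sketch of what we might try next (the scale factors below are hypothetical values, not our current configuration):

```sql
-- Defaults for cost-based autovacuum throttling (postgresql.conf):
--   autovacuum_vacuum_cost_delay = 20ms
--   autovacuum_vacuum_cost_limit = -1   (falls back to vacuum_cost_limit = 200)

-- Per-table thresholds, so only the busy aggregate tables are
-- vacuumed/analyzed more aggressively (hypothetical values):
ALTER TABLE agg_t1187_incoming_a6_dst_port_and_proto_f5
  SET (autovacuum_vacuum_scale_factor  = 0.05,
       autovacuum_analyze_scale_factor = 0.02);
```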

The server managed to handle the workload for 25 minutes, then CREATE INDEX statements started to take too much time.

I just figured out that when we perform the full process (with deletion of old data), CREATE INDEX and TRUNCATE take too much time.

At the same time, the autovacuum process seems to perform more work on the second type of table.
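One way to confirm what the autovacuum workers are busy with while CREATE INDEX and TRUNCATE are stalled is to look at `pg_stat_activity`. A sketch, assuming a pre-9.2 server (on 9.2 and later the columns are `pid` and `query` instead of `procpid` and `current_query`):

```sql
-- List the tables the autovacuum workers are currently processing:
SELECT procpid, query_start, current_query
FROM pg_stat_activity
WHERE current_query LIKE 'autovacuum:%';
```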

Best regards,

Baptiste

--- 
Baptiste LHOSTE 
blhoste@xxxxxxxxxxx 

ALALOOP S.A.S. - Technopole Izarbel - 64210 Bidart 
Téléphone : +33 (0) 5 59 41 51 10 
www.alaloop.com 


-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin


