In this case, there are no updates or deletes - only inserts. For now, I have set per-table autovacuum parameters to minimize the frequency of vacuums while still ensuring the statistics are updated frequently with ANALYZE:
Table auto-vacuum settings:
  VACUUM base threshold    500000000
  VACUUM scale factor      0.3
  ANALYZE base threshold   50000
  ANALYZE scale factor     0.02
  VACUUM cost delay        20
  VACUUM cost limit        200
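For reference, per-table settings like these map onto PostgreSQL's autovacuum storage parameters, which can be set with ALTER TABLE. A sketch mirroring the values above (the table name is hypothetical):

```sql
-- Hypothetical insert-only table; values mirror the settings listed above.
ALTER TABLE price_history SET (
    autovacuum_vacuum_threshold = 500000000,  -- very high: vacuums rare
    autovacuum_vacuum_scale_factor = 0.3,
    autovacuum_analyze_threshold = 50000,     -- low: analyzes frequent
    autovacuum_analyze_scale_factor = 0.02,
    autovacuum_vacuum_cost_delay = 20,
    autovacuum_vacuum_cost_limit = 200
);
```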
On Wed, Nov 16, 2011 at 9:31 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
John R Pierce <pierce@xxxxxxxxxxxx> writes:
> On 11/16/11 4:24 PM, Jason Buberel wrote:
>> Just wondering if there is ever a reason to vacuum a very large table
>> (> 1B rows) containing rows that never get deleted.
> no updates either?
To clarify: in Postgres, an "update" means an insert and a delete.
So unless you mean that this table is insert-only, you certainly
still need vacuum.
> you still want to do a vacuum analyze every so often to update the
> statistics used by the planner.
If it's purely an insert-only table, such as a logging table, then in
principle you only need periodic ANALYZEs and not any VACUUMs.
VACUUM could still be worthwhile though, because (a) it will set commit
hint bits on all pages and (b) it will set visibility-map bits on all
pages. An ANALYZE would only do those things for the random sample of
pages that it visits. While neither of those things is critical, they
do offload work from future queries that would otherwise have to do that
work in-line. So if you've got a maintenance window where the database
isn't answering queries anyway, it could be worthwhile to run a VACUUM
just to get those bits set.
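Concretely, a maintenance-window pass might look like the following sketch (the table name is hypothetical; VERBOSE is optional and just reports what was done):

```sql
-- Run during a maintenance window on the insert-only table.
-- Sets commit hint bits and visibility-map bits on all pages,
-- so future queries don't pay that cost in-line.
VACUUM (VERBOSE) price_history;

-- Refresh planner statistics after the latest batch of inserts.
ANALYZE price_history;
```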
regards, tom lane
--
Jason L. Buberel
CTO, Altos Research
http://www.altosresearch.com/
650.603.0907