Re: Deleting millions of rows

On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian.cox@xxxxxx> wrote:

> In production, the table on which I ran DELETE FROM grows constantly with
> old data removed in bunches periodically (say up to a few 100,000s of rows
> [out of several millions] in a bunch). I'm assuming that auto-vacuum/analyze
> will allow Postgres to maintain reasonable performance for INSERTs and
> SELECTs on it; do you think that this is a reasonable assumption?

Yes. As long as you're deleting a small enough percentage that the table
doesn't get bloated (100k out of several million is a good ratio), AND
autovacuum is running, AND you have enough FSM entries to track the dead
tuples, you're gold.
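
If you want to double-check those conditions on a pre-8.4 server (where the
free space map is still sized by hand), something along these lines should
show whether autovacuum is on and whether the FSM is big enough -- just a
sketch, run from psql as a superuser:

    -- confirm autovacuum is enabled
    SHOW autovacuum;
    SHOW autovacuum_naptime;

    -- FSM sizing (pre-8.4 only; later releases manage free space automatically)
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- a database-wide VACUUM VERBOSE ends with a summary along the lines of
    -- "free space map contains N pages in M relations"; if N is getting close
    -- to max_fsm_pages, raise max_fsm_pages in postgresql.conf and reload.
    VACUUM VERBOSE;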

