
Best approach for large table maintenance

Hi,

I have an application where I drop, recreate, and reload a 1-million-row
table each day, then recreate its indexes. I do this to avoid having to
run VACUUM on the table, as I would if I applied the deltas with DELETEs
or UPDATEs.

It seems that running VACUUM still has some value with this approach,
because when I do run it I still see index row versions being removed. I
do not explicitly drop the indexes, since they are dropped along with
the table.

I have also considered TRUNCATE, but I still have several indexes that,
if left in place, would slow down the data load.

My question is, what is the best way to manage a large table that gets
reloaded each day?

Drop
Create Table
Load (copy or insert/select)
Create Indexes
Vacuum anyway?
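
For concreteness, the first option might look something like the sketch
below. The table, column, and index names and the file path are
placeholders for illustration, not the actual schema:

```sql
-- Option 1: drop and rebuild the whole table each day.
BEGIN;
DROP TABLE IF EXISTS daily_data;          -- also drops its indexes
CREATE TABLE daily_data (
    id      integer,
    payload text
);
-- Bulk load: COPY is usually fastest; INSERT ... SELECT also works.
COPY daily_data FROM '/path/to/daily_extract.csv' WITH (FORMAT csv);
-- Build indexes after the load, so each is built in a single pass.
CREATE INDEX daily_data_id_idx ON daily_data (id);
COMMIT;
-- Refresh planner statistics on the new contents.
ANALYZE daily_data;
```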

Or...

DROP indexes
Truncate
Load (copy or insert/select)
Create Indexes
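
The second option, with the same placeholder names, might be sketched
as:

```sql
-- Option 2: keep the table, truncate and reload.
BEGIN;
DROP INDEX IF EXISTS daily_data_id_idx;   -- so the load is not slowed
TRUNCATE daily_data;                      -- reclaims space immediately
COPY daily_data FROM '/path/to/daily_extract.csv' WITH (FORMAT csv);
CREATE INDEX daily_data_id_idx ON daily_data (id);
COMMIT;
-- Statistics still need refreshing after the reload.
ANALYZE daily_data;
```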

And is vacuum still going to be needed?

Many Thanks,
Mike



