Re: How to best use 32 15k.7 300GB drives?

2011/1/28 Scott Carey <scott@xxxxxxxxxxxxxxxxx>


On 1/28/11 9:28 AM, "Stephen Frost" <sfrost@xxxxxxxxxxx> wrote:

>* Scott Marlowe (scott.marlowe@xxxxxxxxx) wrote:
>> There's nothing wrong with whole-table updates as part of an import
>> process; you just have to know to "clean up" after you're done, and
>> regular vacuum can't fix this issue: only vacuum full, reindex, or
>> cluster can.
>
>Just to share my experience: I've found that creating a new table and
>inserting into it is actually faster than doing full-table updates, if
>that's an option for you.
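
For anyone following along, a rough sketch of what the two approaches look like in practice (the table, column, and index names here are made up for illustration):

  -- Whole-table UPDATE plus the cleanup Scott describes:
  UPDATE big_table SET amount = amount * 1.1;   -- rewrites every row, leaving the old versions behind
  CLUSTER big_table USING big_table_pkey;       -- or: VACUUM FULL big_table; REINDEX TABLE big_table;

  -- The rebuild approach Stephen describes writes each row only once:
  BEGIN;
  CREATE TABLE big_table_new AS
      SELECT id, amount * 1.1 AS amount, updated_at
      FROM big_table;
  CREATE INDEX big_table_new_id_idx ON big_table_new (id);   -- recreate indexes, constraints, grants
  DROP TABLE big_table;
  ALTER TABLE big_table_new RENAME TO big_table;
  COMMIT;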

I wonder if postgres could automatically optimize that: if it thought it
was going to update more than X% of a table, and HOT was not going to
help, it could just create a new table file for XIDs equal to or higher
than the one making the change and leave the old one for older XIDs.
Regular VACUUM could then toss out the old file once no transaction
could see it any more.


I was wondering whether a table file could be deleted once it holds no live rows at all, and whether vacuum could do that. In that case, a vacuum on a table that was recently fully updated could be almost as good as a cluster: any scan would skip the now-removed files really fast, and almost no disk space would be wasted.

--
Best regards,
 Vitalii Tymchyshyn
