Re: Delete/update with limit

First of all, thanks for all the suggestions.

> put a SERIAL primary key on the table
Or:
> Maybe add OIDs to the table, and delete based on the OID number?

No, this is not acceptable: it adds overhead to insertions. Normally the
overhead would be small enough, but on occasion it is noticeable.
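(For reference, the kind of keyed batch delete meant here would look roughly
like the sketch below; the table and column names are made up and the batch
size is arbitrary.)

    -- Hypothetical table with a SERIAL key added only to support batched deletes.
    CREATE TABLE incoming (
        id      serial PRIMARY KEY,
        payload text
    );

    -- Delete at most 1000 rows per pass, oldest first.
    DELETE FROM incoming
    WHERE id IN (
        SELECT id FROM incoming ORDER BY id LIMIT 1000
    );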

> Loop Forever
>   DELETE from incoming_table;
>   VACUUM incoming_table;
> End Loop;

Not workable either: it still doesn't guarantee the table never gets too
big. Once the table is too big, it takes too long to process, and so it has
grown even bigger by the next pass. The whole thing is transient (i.e. the
load will smooth out after a while), but that means the scheme fails exactly
when it is needed... and, in case you didn't guess, the users want the
results immediately, not the next day, so we can't simply defer the
processing to the night when we have virtually no load.

> Use partitioning: don't delete, just drop the partition after a while.

OK, this could work.

It will still be completely different from the code for the other DBs,
but it will work.
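(A minimal sketch of what I have in mind, assuming a PostgreSQL version with
declarative partitioning; the table, column and partition names are made up:)

    -- Hypothetical queue table partitioned by insertion time.
    CREATE TABLE incoming (
        received_at timestamptz NOT NULL,
        payload     text
    ) PARTITION BY RANGE (received_at);

    -- One partition per day; names and ranges are illustrative.
    CREATE TABLE incoming_2024_01_01 PARTITION OF incoming
        FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');

    -- Instead of DELETE, discard a whole day's data in one cheap operation.
    DROP TABLE incoming_2024_01_01;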

Thanks again for all the suggestions,
Csaba.
