Peter Meszaros wrote:
Hi All,
I use PostgreSQL 8.3.7 as a huge queue. There is a very simple table
with six columns and two indices, and about 6 million records are
written into it every day, continuously committed every 10 seconds from
8 clients. The table stores approximately 120 million records, because a
cron job daily deletes the ones that are older than 20 days.
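That nightly cleanup is presumably a single bulk DELETE along these
lines (the table and column names below are only placeholders, since
your actual schema isn't shown):

    -- Hypothetical version of the existing cron job: one large DELETE
    -- that leaves dead tuples behind and bloats both indexes until a
    -- later VACUUM catches up.
    DELETE FROM event_queue
     WHERE created_at < now() - interval '20 days';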
You may be an ideal candidate for table partitioning - this is
frequently used for rotating log table maintenance.
Use a parent table and 20 child tables. Create a new child every day and
drop the 20-day-old one. Dropping a table is far faster and lower-impact
than a DELETE against a 120-million-row table. Index bloat is limited to
one day of inserts and is eliminated within 20 days. No deletes means
no vacuum requirement on the affected tables. Each child table holds
only about 6 million records. A clever backup scheme can ignore
prior days' static child tables (and you could keep
historical data dumps offline for later use if desired).
Read up on it here:
http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html
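Roughly, the per-day routine looks like this with 8.3/8.4
inheritance-based partitioning; the table and column names below are
just placeholders for your real schema:

    -- Parent table: clients and readers keep querying this name.
    CREATE TABLE event_queue (
        id         bigint,
        created_at timestamptz NOT NULL
        -- ... your remaining four columns
    );

    -- One child per day; the CHECK constraint lets constraint_exclusion
    -- skip children that cannot match a query's date range.
    CREATE TABLE event_queue_20090526 (
        CHECK (created_at >= DATE '2009-05-26'
           AND created_at <  DATE '2009-05-27')
    ) INHERITS (event_queue);

    -- Indexes are per child, so any bloat never outlives the child.
    CREATE INDEX event_queue_20090526_created_at_idx
        ON event_queue_20090526 (created_at);

    -- Clients insert into the current day's child (directly, or via a
    -- trigger/rule on the parent), and the nightly job becomes just:
    DROP TABLE event_queue_20090506;

With constraint_exclusion turned on, queries against event_queue that
filter on created_at will only touch the relevant children.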
Cheers,
Steve