Dave Huber wrote:
I am inserting 250 rows of data (~2 KB/row) every 5 seconds into a table (the primary key is a bigserial). I need to limit the size of the table to keep it from filling up the disk. Is there a way to set up the table to do this automatically, or do I have to periodically count the rows and delete the oldest ones manually?
I think you'll find row deletes would kill your performance. For time-aged data like that, we use partitioned tables; we typically partition by week (keeping 6 months of history), but you might end up partitioning by N*1000 PK values or some such, so you can use your PK to determine the partition. With a partitioning scheme, it's much faster to add a new partition and drop the oldest at whatever interval you need; a sketch follows below. See http://www.postgresql.org/docs/current/static/ddl-partitioning.html
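To make that concrete, here is a minimal sketch of partitioning by PK range. It uses declarative partitioning syntax (PostgreSQL 10+, and 11+ for the primary key on the parent); the docs linked above also cover the older inheritance-and-trigger approach. The table and column names ("readings", "payload") are made up for illustration.

    -- Hypothetical table, partitioned by ranges of the bigserial PK
    -- so each partition holds a fixed number of rows.
    CREATE TABLE readings (
        id      bigserial,
        payload text,
        PRIMARY KEY (id)
    ) PARTITION BY RANGE (id);

    -- One partition per 1,000,000 PK values; create the next one
    -- ahead of time from a cron job or similar.
    CREATE TABLE readings_p0 PARTITION OF readings
        FOR VALUES FROM (1) TO (1000001);
    CREATE TABLE readings_p1 PARTITION OF readings
        FOR VALUES FROM (1000001) TO (2000001);

    -- Retiring the oldest data is then a near-instant catalog
    -- operation instead of millions of row deletes plus a VACUUM:
    DROP TABLE readings_p0;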