On 2012-02-15, Asher Hoskins <asher@xxxxxxxxxx> wrote:
> Hello.
>
> I've got a database with a very large table (currently holding 23.5
> billion rows, the output of various data loggers over the course of my
> PhD so far). The table itself has a trivial structure (see below) and is
> partitioned by data time/date and has quite acceptable INSERT/SELECT
> performance.
>
> CREATE TABLE rawdata (
>     value REAL NOT NULL,
>     sample_time TIMESTAMP WITH TIME ZONE NOT NULL,
>     -- Dataset reference.
>     dataset INTEGER NOT NULL
> );
>
> The data loggers are collected every month or so and uploaded into the
> database, resulting in another 1-2 billion rows in the table each time.
> Data is never deleted from the table and so it's essentially read-only.
>
> My problem is that the autovacuum system isn't keeping up with INSERTs
> and I keep running out of transaction IDs. SELECT performance also drops
> off the more I insert, which from looking at the output of iostat seems
> to be because the autovacuum tasks are taking up a lot of the disk
> bandwidth - the disks are commodity items in a software RAID and not
> terribly fast.

Have you tried using COPY instead of INSERT (you'll have to insert into
the correct partition), or alternatively putting several rows in the
VALUES part of the INSERT (that should work with trigger-based
partitioning)?

    insert into rawdata values
        (1.0,  '2012-03-25 16:29:01 +13', 1),
        (1.1,  '2012-03-25 16:29:02 +13', 1),
        (1.15, '2012-03-25 16:29:03 +13', 1),
        (1.17, '2012-03-25 16:29:04 +13', 1),
        (1.18, '2012-03-25 16:29:05 +13', 1);

etc...

-- 
⚂⚃ 100% natural
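
A minimal sketch of the COPY route, assuming inheritance-style child
partitions named by month and a server-readable CSV dump from the
loggers; the partition name (rawdata_2012_03) and the file path are
hypothetical:

    -- Load one month of logger output straight into its partition.
    -- Targeting the child table directly bypasses the partitioning
    -- trigger, and the whole file is loaded in a single transaction.
    COPY rawdata_2012_03 (value, sample_time, dataset)
        FROM '/data/loggers/2012-03.csv'
        WITH (FORMAT csv);

If the current loader commits each INSERT separately, this also means
one transaction ID per monthly batch rather than one per row, which is
where much of the wraparound pressure on autovacuum comes from. From
psql, \copy with the same arguments reads the file from the client
machine instead of the server.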