Just once.
OK, another potential problem eliminated, but it gets stranger...
If I have 5000 lines in the CSV file (which I load into a 'temporary' table using COPY), I can be sure that drone_id is a PK there. That is because the CSV file contains measurements from all the drones, one measurement per drone. I usually have around 100 new drones per batch, so I insert those into both drones and drones_history. Then I first insert into drones_history and then update those rows in drones. Should I try doing it the other way around?
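The batch described above could look roughly like this. The table and column names (measurements_load, last_measurement, the CSV path) are guesses, since the actual schema hasn't been posted:

```sql
BEGIN;

-- Load the 5000-row CSV into a temp table shaped like the history table
CREATE TEMP TABLE measurements_load (LIKE drones_history INCLUDING DEFAULTS);
COPY measurements_load FROM '/path/to/measurements.csv' WITH CSV;

-- The ~100 drones not yet known get a row in drones first
INSERT INTO drones (drone_id)
SELECT m.drone_id
FROM   measurements_load m
WHERE  NOT EXISTS (SELECT 1 FROM drones d WHERE d.drone_id = m.drone_id);

-- Append the whole batch to the history table
INSERT INTO drones_history SELECT * FROM measurements_load;

-- Then refresh the current values in drones
UPDATE drones d
SET    last_measurement = m.measurement
FROM   measurements_load m
WHERE  d.drone_id = m.drone_id;

COMMIT;
```

Since drone_id is unique within the file, the UPDATE ... FROM join touches each row in drones at most once.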
No, it doesn't really matter.
Although, I think I'm having some disk-related problems, because my I/O throughput is pretty low when inserting into the tables. For instance, dropping and then recreating the constraints takes around 15-30 seconds on a 25M-row table, with disk I/O steady at around 60 MB/s in read and write. It could just be that the ext3 partition is that fragmented. I'll try again later this week on a new set of disks with an ext4 filesystem to see how it goes.
If you CLUSTER a table, it is entirely rebuilt so if your disk free space isn't heavily fragmented, you can hope the table and indexes will get allocated in a nice contiguous segment.
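For reference, a CLUSTER on the history table would be something like the following; the index name is a guess, and note that CLUSTER takes an exclusive lock on the table for the duration of the rewrite:

```sql
-- Rewrite drones_history in the physical order of its PK index
CLUSTER drones_history USING drones_history_pkey;

-- Refresh planner statistics after the rewrite
ANALYZE drones_history;
```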