Hi,

I have a simple table with indexes on two integer columns. Data is inserted very often (no updates, no deletes, just inserts): at least 4000-5000 rows per second, and the values going into the two indexed columns are essentially random. Everything is "fine" for the first 10-20M rows; after that, performance gets worse and worse, and by 50M rows I can't insert more than 1500 rows per second. How can I improve this? (A sketch of the schema and insert pattern is at the bottom of this mail.)

1) The table is already partitioned; at the moment it's split into 30 partitions. What kind of "select" performance impact would I take going to 3000 partitions? I know that in the 3000-partition case there will be fewer rows per partition, but index seek time is O(log N), right? So for a select that can't be pruned and has to probe every partition's index, 3000 * log(N/3000) is much bigger than 30 * log(N/30). (There's a quick SQL sanity check of this arithmetic at the bottom of this mail too.)

2) How much improvement can I expect going to RAID 1+0 on 15k rpm disks instead of RAID 5 on 7200 rpm disks? I know, I know, I shouldn't even be testing such a huge workload with such a setup, but I can't try anything else at the moment. The disk array is pretty fast anyway: 190 MB/s sequential output, 320 MB/s sequential input, 1600 random seeks/s (bonnie++).

3) Is there anything else I can try to "help" Postgres update those indexes faster?
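For reference, here's a minimal sketch of the setup I described, with made-up names, assuming declarative hash partitioning (PostgreSQL 11+; on older versions this would be inheritance-based partitioning instead):

    -- hypothetical schema: two random-valued integer columns, both indexed,
    -- hash-partitioned 30 ways (only the first partition shown)
    CREATE TABLE samples (
        a integer NOT NULL,
        b integer NOT NULL
    ) PARTITION BY HASH (a);

    CREATE INDEX ON samples (a);
    CREATE INDEX ON samples (b);

    CREATE TABLE samples_p0 PARTITION OF samples
        FOR VALUES WITH (MODULUS 30, REMAINDER 0);
    -- ... and likewise for the remaining 29 partitions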
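The insert pattern is basically batches like this, several times a second (again hypothetical: random() stands in for the real input, which is just as random):

    -- one batch of 5000 rows, random values in both indexed columns
    INSERT INTO samples (a, b)
    SELECT (random() * 2000000000)::int,
           (random() * 2000000000)::int
    FROM generate_series(1, 5000);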
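And the back-of-envelope arithmetic behind question 1, plugging in N = 50M (this models the worst case where a select probes every partition's index):

    SELECT 30   * log(2, 50000000.0 / 30)   AS probes_30,   -- ~620
           3000 * log(2, 50000000.0 / 3000) AS probes_3000; -- ~42000

Per-probe cost shrinks a little (log(N/3000) vs log(N/30)), but the number of probes grows 100-fold, so the total comes out roughly 70x worse.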