Hi all,

I recently asked a similar question that hasn't received a useful answer yet, so I'll chime in here too.

In my case, the table I was inserting into was already quite big to start with (and analyzed accordingly), so I did not expect the inserts to slow down because of the indexes; they were already large, as I said. In other words, I expected the insert speed to stay more or less constant over the whole load. Instead, after a while the average insert speed dropped considerably and suddenly, which I can't explain and would like to understand.

The table held ~100 million live rows and is updated quite often; the load inserted ~40 million rows. After ~10 million rows the average speed suddenly dropped by a factor of about 4.

My only suspicion is that the table started out with a fairly large amount of free space (because it is updated so often), and that this free space was eventually exhausted. So the speed difference might come from reusing free space versus allocating new pages? The same could apply to the b-tree indexes. Is there any other reasonable explanation for this?

I see this kind of behavior consistently: the speed is fine at the start of the insert and then it slows down. I would like to know whether I can tune my DB to cope with this, or whether I should just accept that it works this way... (I've put a sketch of what I plan to check and tune below the quoted message.)

Cheers,
Csaba.

On Mon, 2005-09-26 at 19:58, Jim C. Nasby wrote:
> On Mon, Sep 26, 2005 at 05:55:18PM +0530, surabhi.ahuja wrote:
> > i have seen that after insertion say (20 k rows) the insertion to tables becomes slow..why is it?
>
> Most likely due to indexes.
>
> > is there any way in which u can stop the performance from degrading
>
> If you're loading from scratch, don't create the indexes until after the
> load is done.
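
Coming back to my free-space suspicion: here is a minimal sketch of what I intend to check, assuming the free space map really is the issue. The table name and the configuration values are placeholders I made up for illustration and would have to be sized for the actual installation.

    -- Database-wide vacuum; if I remember right, the verbose form ends with
    -- a summary of how many FSM page slots are needed versus allocated.
    VACUUM VERBOSE;

    -- Vacuum and re-analyze just the big, frequently updated table, so the
    -- space freed by updates is actually registered in the free space map.
    VACUUM ANALYZE my_big_table;    -- "my_big_table" is a placeholder name

    # postgresql.conf -- example values only; changing these needs a restart
    max_fsm_pages = 2000000
    max_fsm_relations = 1000

If the VACUUM VERBOSE summary shows more page slots needed than max_fsm_pages allows, that would at least explain why the freed space stops being reused after a while.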
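
And for what it's worth, this is how I read Jim's advice for the load-from-scratch case; the table, index, column and file names below are placeholders of mine:

    DROP INDEX my_big_table_idx;              -- drop secondary indexes before the load
    COPY my_big_table FROM '/tmp/data.copy';  -- bulk load with no index maintenance
    CREATE INDEX my_big_table_idx ON my_big_table (some_column);
                                              -- rebuild the index once, at the end
    ANALYZE my_big_table;                     -- refresh planner statistics

In my situation the table is not loaded from scratch, so dropping the indexes on a live 100-million-row table is probably not an option, but for a fresh load it should avoid the per-row index maintenance cost entirely.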