tv@xxxxxxxx ("=?UTF-8?B?VG9tw6HFoSBWb25kcmE=?=") writes: > Try to one of these: > > a) don't use INSERT statements, use a COPY instead > > b) from time to time run ANALYZE on the "public" table (say 1000 > inserts, then one analyze) > > c) create the table without constraints (primary / foreign keys in this > case), import all the data, and then create the constraints > > The (b) and (c) may be combined, i.e. import without constraints and > analyze from time to time. I'd probably try the (a) at first, anyway. > > Try to gather some more statistics - is the problem related to CPU or > I/O? Use 'dstat' for example - this might say give you a hint in case > the advices mentioned above don't help. I agree with all but b). - If you use COPY, that copies data "in bulk" which is *way* faster than submitting individual INSERT statements that must each be parsed. So I certainly agree with a). - There are two prime reasons to expect the data load to slow down: 1. Because adding entries to the index gets more expensive the larger the table gets; 2. Because searching through foreign key constraints tends to get more expensive as the target table grows. Those point to doing c). If you put off evaluating indices and foreign key constraints until all of the data is loaded, there should be no need to run ANALYZE during the COPY process. And there should be no reason for loading data to get more costly as the size of the table increases. -- let name="cbbrowne" and tld="linuxfinances.info" in name ^ "@" ^ tld;; http://linuxdatabases.info/info/advocacy.html Rules of the Evil Overlord #116. "If I capture the hero's starship, I will keep it in the landing bay with the ramp down, only a few token guards on duty and a ton of explosives set to go off as soon as it clears the blast-range." <http://www.eviloverlord.com/> ---------------------------(end of broadcast)--------------------------- TIP 4: Have you searched our list archives? http://archives.postgresql.org/