Scott,
There were no foreign keys (not even indexes) during the data import, and
none of the tables had more than 4000 records. I have also checked the log
for durations, and all insert statements took 0.000 ms. So it seems that the
problem is not on the server side.
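For reference, duration logging was enabled with something like this in
postgresql.conf (the exact setting here is just an example):

    # log every statement together with its execution time
    log_min_duration_statement = 0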
During the import no other application was doing anything, and there was no
other HDD activity either.
Best Regards,
Otto
----- Original Message -----
From: "Scott Marlowe" <smarlowe@xxxxxxxxxxxxxxxxx>
To: "Havasvölgyi Ottó" <h.otto@xxxxxxxxxxx>
Cc: "Tom Lane" <tgl@xxxxxxxxxxxxx>; <pgsql-general@xxxxxxxxxxxxxx>
Sent: Tuesday, August 02, 2005 5:57 PM
Subject: Re: feeding big script to psql
On Tue, 2005-08-02 at 04:24, Havasvölgyi Ottó wrote:
Tom,
Thanks for the suggestion. I have just applied both switches, -f (which I
had already used in the previous case) and -n, but it becomes slow again. At
the beginning it reads about 300 KB per second, but by the time it has read
1.5 MB it reads only about 10 KB per second; it slows down gradually. Maybe
others should also try this scenario. Can I help with anything?
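For reference, the command looks something like this (the script name is
just an example):

    psql -n -f import.sql mydb

where -n disables readline and -f reads the script from a file instead of
standard input.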
I bet you've got an issue where a seq scan on an FK field or something
works fine for the first few thousand rows. At some point, pgsql should
switch to an index scan, but without fresh statistics it just doesn't know
to.
Try wrapping every 10,000 or so inserts with
begin;
<insert 10,000 rows>
commit;
analyze;
begin;
rinse, wash, repeat.
You probably won't need an analyze after the first one though.
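Spelled out, a batched script would look something like this (the table and
column names are made up, and 10,000 is just a starting point for the batch
size):

    begin;
    insert into orders (id, customer) values (1, 'alice');
    -- ... roughly 10,000 inserts ...
    insert into orders (id, customer) values (10000, 'bob');
    commit;
    analyze;  -- refresh planner stats so FK checks can use an index scan
    begin;
    -- ... next 10,000 inserts, then commit again ...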