On Jan 26, 2008 5:42 AM, growse <nabble@xxxxxxxxxx> wrote:
>
> Scott Marlowe-2 wrote:
> >
> > On Jan 25, 2008 5:27 PM, growse <nabble@xxxxxxxxxx> wrote:
> >>
> >> Hi,
> >>
> >> I've got a pg database, and a batch process that generates some
> >> metadata to be inserted into one of the tables. Every 15 minutes or
> >> so, the batch script re-calculates the metadata (600,000 rows), dumps
> >> it to a file, and then does a TRUNCATE of the table followed by a
> >> COPY to import that file into the table.
> >>
> >> The problem is that while this process is happening, other queries
> >> against this table time out. I've tried copying into a temp table
> >> before doing an "INSERT INTO table (SELECT * FROM temp)", but the
> >> second statement still takes a lot of time and causes a loss of
> >> performance.
> >
> > Can you import to another table then
> >
> > begin;
> > alter table realtable rename to garbage;
> > alter table loadtable rename to realtable;
> > commit;
> >
> > ?
>
> This is a possibility. My question on this is: would an ALTER TABLE
> real RENAME TO garbage be faster than a DROP TABLE real?

I don't know. They're both pretty fast. I'd do a test, with parallel
contention on the table, and see.

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match
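
The load-and-swap approach suggested in the thread could be sketched end to end as below. This is a minimal sketch, not a tested recipe: the table names `realtable`, `loadtable`, and `garbage` come from the thread, while the file path and the `LIKE` clause are assumptions for illustration (indexes and constraints on the staging table would need to be recreated separately).

```sql
-- 1. Load the fresh metadata into a staging table while readers keep
--    using the live table. (File path is hypothetical; LIKE copies
--    column definitions only, not indexes.)
CREATE TABLE loadtable (LIKE realtable);
COPY loadtable FROM '/path/to/metadata.csv';

-- 2. Swap the tables inside one transaction. The renames need only a
--    brief exclusive lock, so concurrent queries block momentarily
--    instead of timing out for the duration of the whole COPY.
BEGIN;
ALTER TABLE realtable RENAME TO garbage;
ALTER TABLE loadtable RENAME TO realtable;
COMMIT;

-- 3. Drop the stale data outside the critical section.
DROP TABLE garbage;
```

One caveat worth noting: views, foreign keys, and prepared statements that reference `realtable` by OID rather than by name may not follow the swap, so this works best when the table is queried directly.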