Hi all,

I am no database wizard, so I am hoping someone may be able to assist me. :)

I get a daily CSV file of 6.5 million records. I create a temporary table and COPY them in. On completion I create an index on the mdnid column; this column is also indexed in table 2. This part is very fast. I had some "checkpoints occurring too frequently" issues, but those I have resolved.

I then use the following procedure to update all the records, and if a record does not exist, insert it instead:

declare
    v_update e_lookup_load%ROWTYPE;
begin
    for v_update in select * from e_lookup_load loop
        update e_lookup
           set cur_balance    = v_update.cur_balance,
               last_usage     = v_update.last_usage,
               b_code         = v_update.b_code,
               u_date         = v_update.u_date,
               date_of_insert = v_update.date_of_insert
         where mdnid = v_update.mdnid;

        if not found then
            insert into e_lookup (mdnid, cur_balance, last_usage,
                                  b_code, u_date, date_of_insert)
            values (v_update.mdnid, v_update.cur_balance, v_update.last_usage,
                    v_update.b_code, v_update.u_date, v_update.date_of_insert);
        end if;
    end loop;
end;

(Note: my original draft had a stray "v_date_of_insert" in the VALUES list; it should be v_update.date_of_insert as above.)

From my understanding, a FOR loop uses a cursor internally anyway, so there is no need to declare one explicitly. Am I doing something fundamentally wrong, given that the operation is slow as molasses? Or is there a better way altogether that I have not thought of? The bottom line is that at no point can the e_lookup table be unavailable to clients, else I would have just done a simple drop and rename after the COPY.

Kind regards,
Craig

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
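P.S. For what it's worth, here is the set-based rewrite I have been toying with instead of the row-by-row loop: one UPDATE ... FROM to refresh all matching rows, then one INSERT ... SELECT for the rows that matched nothing. This is an untested sketch using the table and column names from my procedure above, and it assumes a PostgreSQL version without INSERT ... ON CONFLICT (on 9.5+ a single INSERT ... ON CONFLICT (mdnid) DO UPDATE would presumably do the same job), so please treat the syntax with suspicion:

```sql
BEGIN;

-- Update every existing row in one statement instead of 6.5M
-- single-row UPDATEs; the join on mdnid uses both indexes.
UPDATE e_lookup e
   SET cur_balance    = l.cur_balance,
       last_usage     = l.last_usage,
       b_code         = l.b_code,
       u_date         = l.u_date,
       date_of_insert = l.date_of_insert
  FROM e_lookup_load l
 WHERE e.mdnid = l.mdnid;

-- Insert only the rows that have no match in e_lookup yet.
INSERT INTO e_lookup (mdnid, cur_balance, last_usage,
                      b_code, u_date, date_of_insert)
SELECT l.mdnid, l.cur_balance, l.last_usage,
       l.b_code, l.u_date, l.date_of_insert
  FROM e_lookup_load l
 WHERE NOT EXISTS (SELECT 1 FROM e_lookup e WHERE e.mdnid = l.mdnid);

COMMIT;
```

My thinking is that wrapping both statements in one transaction would also keep e_lookup readable by clients throughout, since readers would never see a half-applied load. Does that look like the right direction?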