On Tue, 2007-08-28 at 08:19 +0100, Richard Huxton wrote:
> Ow Mun Heng wrote:
> > Continuing with my efforts to get similar functionality as mysql's
> > mysqlimport --replace, I want to ask for the list's opinion on which is
> > better
>
> I would suggest #3
>
> > 3. \copy ; update ; insert
>
> Don't forget to run a DELETE phase too, for rows that are no longer in
> the import but are in your database.

No DELETE will _ever_ be done (hopefully, if there's enough disk space), for
archival and tracking purposes etc. Theoretically, all the imported rows are
either new rows or updated rows.

> > btw, pkey was actually a multiple column composite key which I've
> > concatenated together as varchar:
> > to make it simpler for the insertion/update/deletion
>
> What gets complicated with a multi-column key?

Less typing per insert/update statement, so it'll be

  where a.pkey = b.pkey

instead of

  a.key1 = b.key1 and a.key2 = b.key2 and ... up to key5

(I've put a rough sketch of the full update-then-insert step at the bottom of
this mail.)

> > Option 3 seems to be better because it will leave less dead tuples due
> > to MVCC
>
> It's also better because it's what you actually want to do (in a logical
> sense, you want to update existing rows). That means if you ever attach
> triggers to the table they will behave as you expect.

Actually, what I want is mysqlimport --replace functionality, which just
replaces the whole row with new data. This DB will never have to deal with
updates to specific rows/columns. No triggering is in the plan. If there is
any triggering, it will be to denormalise the table once the data import is
finished. This, if it comes down to it, will be triggered via perl-DBI per
import cycle, which is why I like option 4.

> If you're trying to keep two databases in sync then you need a process
> to manage that. Don't try and make your import handle it - deal with it
> properly.

Nope, not trying to do that. Not any more, anyway. The new DB will just be a
subset of the main_db's data: whatever is needed by the engineers and nothing
more. (New columns = new requests for "new" columns to be added.)

> > Option 4 is simpler, delete all the duplicate p_keys in the
> > main_table before inserting it with the newest data (temp_table will
> > _always_ have the newest unique data for that table)

Besides being the illogical way of doing things, what else is wrong with
option 4?

> > On average, daily will have between 5K to 250K of unique rows
>
> Yes, you should make sure you are vacuuming at least once per import
> cycle. However, as long as you have enough space in your free-space-map,
> any dead-tuple-space will be re-used.

An import cycle will pull between 100 and 1000 rows. I don't think a vacuum
per import cycle is needed. Do I?

> The other option would be #5:
> 1. Copy whole table from MSSQL to PG, but under new name (t1_tmp)
> 2. Rename t1 => t1_old, t1_tmp => t1
> 3. Drop t1_old
> If you don't have any foreign keys referring to the table, or views
> depending on it then that is probably simplest, although least efficient.

Actually, I can't do that. The main DB's data is being deleted every day, and
I want to archive the old data (to be used for analysis etc.) as well.
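For completeness, here's a rough sketch of what the option 3 update-then-insert
step could look like. It assumes temp_table has already been loaded with \copy
and has the same layout as main_table; the col1/col2 column names are just
placeholders for the real columns.

  BEGIN;

  -- replace the data of rows whose pkey already exists in main_table
  UPDATE main_table
     SET col1 = t.col1,
         col2 = t.col2
    FROM temp_table t
   WHERE main_table.pkey = t.pkey;

  -- insert rows whose pkey is not yet in main_table
  INSERT INTO main_table (pkey, col1, col2)
  SELECT t.pkey, t.col1, t.col2
    FROM temp_table t
   WHERE NOT EXISTS (SELECT 1 FROM main_table m WHERE m.pkey = t.pkey);

  COMMIT;

Run once per import cycle against the freshly loaded temp_table, this should
give the mysqlimport --replace end result (every row ends up with the newest
data) without ever deleting anything from main_table.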