On Mon, 2008-01-21 at 12:35 -0500, blackwater dev wrote:
> I have a text file that contains 200k rows. These rows are to be
> imported into our database. The majority of them will already exist,
> while a few are new. Here are a few options I've tried:
>
> I've had PHP cycle through the file row by row and, if the row is
> there, delete it and do a straight insert, but that took a while.
>
> Now I have PHP get the row from the text file and do array_combine
> with a default array I have in the class so I can have key/value
> pairs. I then take that generated array and do array_diff against the
> data array I pulled from the db, which leaves me the columns that are
> different, so I do an update on only those columns for that specific
> row. This is slow, and after about 180,000 rows PHP throws a memory
> error. I'm resetting all my vars to NULL at each iteration, so I'm
> not sure what's up.
>
> Anyone have a better way to do this? In MySQL I could simply do a
> REPLACE on each row... but not in Postgres.

Does Postgres support any method of temporarily disabling keys/indexing?
Indexing is what causes the inserts to take a while. MySQL can optimize
an import by locking the table and temporarily disabling the
keys/indexes. You'll see the following lines surrounding the inserts in
recent MySQL database dumps:

/*!40000 ALTER TABLE `xxx` DISABLE KEYS */;
INSERT ...
INSERT ...
/*!40000 ALTER TABLE `xxx` ENABLE KEYS */;

Cheers,
Rob.
-- 
...........................................................
SwarmBuy.com - http://www.swarmbuy.com

    Leveraging the buying power of the masses!
...........................................................
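
Postgres has no direct equivalent of DISABLE KEYS, but a similar effect
can usually be had by dropping secondary indexes before a bulk load and
recreating them once all the rows are in; DDL in Postgres is
transactional, so the whole sequence can run inside one transaction. A
rough sketch of that idea only (the table xxx, the index xxx_col1_idx,
and the file path are hypothetical, and it covers the indexing cost,
not the REPLACE-style update-or-insert itself):

    BEGIN;
    DROP INDEX xxx_col1_idx;                  -- shed the secondary index before loading
    COPY xxx (id, col1, col2) FROM '/tmp/import.txt';
                                              -- server-side COPY needs superuser rights;
                                              -- psql's \copy does the same from the client
    CREATE INDEX xxx_col1_idx ON xxx (col1);  -- rebuild once, after the load
    ANALYZE xxx;                              -- refresh planner statistics
    COMMIT;

The primary key index can't be dropped this way without dropping the
constraint behind it, so it still gets maintained row by row during the
load.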