blackwater dev wrote:
I have a text file that contains 200k rows. These rows are to be imported
into our database. The majority of them will already exist, while a few are
new. Here are a few options I've tried:
I've had PHP cycle through the file row by row and, if the row is already there,
delete it and do a straight insert, but that took a while.
Now I have PHP get the row from the text file and run array_combine with
a default array I have in the class so I can have key/value pairs. I then
take that generated array and do array_diff against the data array I pulled
from the db, which gives me the columns that are different, so I do an update
on only those columns for that specific row. This is slow, and after about
180,000 rows, PHP throws a memory error. I'm resetting all my vars to NULL
at each iteration, so I'm not sure what's up.
Anyone have a better way to do this? In MySQL, I could simply do a REPLACE on
each row...but not in Postgres.
step 1: clone the table and load the data.
create temporary table x as select * from othertable limit 1;
in mysql:
load data infile '/path/to/csv' into table x;
(http://dev.mysql.com/doc/refman/4.1/en/load-data.html)
in postgres:
\copy x from '/path/to/csv'
(no semi-colon on the end).
analyze; // make sure you do this, otherwise the next step is going to take forever.
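Putting step 1 together, a rough psql session might look like the following.
This is just a sketch: it assumes the CSV's columns line up with othertable's,
and it adds "with csv" since plain \copy expects tab-delimited text. The limit 0
simply keeps the clone empty; limit 1 works too, since step 2 deletes the stray
row anyway.
-- step 1: clone the structure, bulk-load the file, refresh the stats
create temporary table x as select * from othertable limit 0;
\copy x from '/path/to/csv' with csv
analyze x;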
step2:
delete the rows from the temp table that already exist in the real table:
delete from x where id in (select id from othertable);
or some such variant.
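One such variant, if the IN subquery turns out to be slow on your data, is
Postgres' delete ... using join form (assuming id is the key in both tables):
-- drop everything from the temp table that the real table already has
delete from x using othertable where x.id = othertable.id;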
step3: insert the leftover rows into the real table.
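Something like this should cover step 3 (again assuming the column order in x
matches othertable; list the columns explicitly if it doesn't):
-- whatever is left in x is genuinely new, so append it to the real table
insert into othertable select * from x;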
--
Postgresql & php tutorials
http://www.designmagick.com/