I'm not an expert, but I would think that if you can afford to do it in a single transaction, it would be much faster: the system could simply skip keeping a log to be ready to roll back a 1-billion-row update!
Of course, it would be preferable to use psql to execute the statements one by one as separate transactions, and to do it with X parallel psql sessions (splitting the big text file into X parts), something like the sketch below, yet Joey seemed reluctant to use the console =)
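
A minimal sketch of that split-and-parallelize approach, assuming GNU split is available, that the dump holds one complete statement per line, and that the file and database names (big_file.sql, mydb) are hypothetical:

    # split into 4 parts without breaking lines (GNU split chunk mode)
    split -n l/4 big_file.sql part_

    # one psql session per part, in parallel; each statement
    # commits as its own transaction (psql's default behavior)
    for f in part_*; do
        psql -d mydb -f "$f" &
    done
    wait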
Rémi-C
2013/11/27 Albe Laurenz <laurenz.albe@xxxxxxxxxx>
John R Pierce wrote:
> On 11/26/2013 9:24 AM, Joey Quinn wrote:
>> When I ran that command (select * from pg_stat_activity), it returned
>> the first six lines of the scripts. I'm fairly sure it has gotten a
>> bit beyond that (been running over 24 hours now, and the size has
>> increased about 300 GB). Am I missing something for it to tell me what
>> the last line processed was?
>
> that means your GUI lobbed the entire file at postgres in a single
> PQexec call, so it's all being executed as a single statement.
>
> psql -f "filename.sql" dbname would have processed the queries one at
> a time.
Yes, but that would slow down processing considerably, which would not help in this case.
I'd opt for
psql -1 -f "filename.sql" dbname
so it all runs in a single transaction.
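A concrete form of that invocation, with ON_ERROR_STOP added as an assumption (it is not part of the original suggestion) so psql stops at the first error and the single enclosing transaction rolls back:

    # -1 / --single-transaction wraps the whole file in BEGIN/COMMIT;
    # ON_ERROR_STOP=1 makes psql quit at the first error instead of
    # continuing through an already-aborted transaction
    psql -1 -v ON_ERROR_STOP=1 -f "filename.sql" dbname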
Yours,
Laurenz Albe