Re: autocommit (true/false) for more than 1 million records

On COPY's atomicity -- I'm looking for a definitive answer from a core
developer, not a user's guess, please.

Suppose I COPY a huge amount of data, e.g. 100 records.

The first 99 records are fine for the target, but the 100th is not -- it
comes with a malformed record format or a target constraint violation.

The whole COPY is then aborted, and the 99 good records do not make it
into the target table.

My question is: where have these 99 records been living on the database
server while the 100th has not yet arrived and the need to throw the
accumulated data away has not yet arisen?

There have to be some limits on the space and/or row counts taken up by
the new, uncommitted data while the COPY operation is still in progress.
What are they?

Say I am COPYing 100 TB of data and the bad records are near the end of
the feed -- how will this all error out?
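
(For reference, here is a minimal JDBC sketch that demonstrates the
all-or-nothing behavior described above: a COPY feed whose last row
violates a primary key leaves zero visible rows. The connection details
and table name are hypothetical, and it assumes a driver that exposes
org.postgresql.copy.CopyManager.)

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyAtomicityDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TEMP TABLE copy_demo (id int PRIMARY KEY)");
            }

            CopyManager cm = conn.unwrap(PGConnection.class).getCopyAPI();
            // Rows 1 and 2 are fine; the repeated 2 violates the primary key.
            try {
                cm.copyIn("COPY copy_demo FROM STDIN", new StringReader("1\n2\n2\n"));
            } catch (SQLException e) {
                System.out.println("COPY failed as expected: " + e.getMessage());
            }

            // The two good rows were rolled back along with the bad one.
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT count(*) FROM copy_demo")) {
                rs.next();
                System.out.println("Visible rows after failed COPY: " + rs.getLong(1)); // 0
            }
        }
    }
}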

Thanks,

-- Alex



On Mon, Aug 25, 2014 at 11:48 AM, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
On Fri, Aug 22, 2014 at 1:49 PM, Emi Lu <emilu@xxxxxxxxxxxxxxxxx> wrote:
Hello,


Trying to insert 1 million records into one table through Java JDBC into PostgreSQL 8.3. May I know whether (1) or (2) is better, please?

(1) set autocommit(true)
(2) set autocommit(false)
     commit every n records (e.g., 100, 500, 1000, etc.; see the sketch below)
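
(For concreteness, option (2) usually looks like the following sketch --
batched inserts through a PreparedStatement with a manual commit every n
rows. The table, columns, batch size, and connection details here are
hypothetical.)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertDemo {
    private static final int BATCH_SIZE = 1000; // commit every n records

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and table -- adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            conn.setAutoCommit(false); // option (2)
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO target (id, payload) VALUES (?, ?)")) {
                for (int i = 1; i <= 1000000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (i % BATCH_SIZE == 0) {
                        ps.executeBatch(); // send the accumulated batch to the server
                        conn.commit();     // commit every BATCH_SIZE rows
                    }
                }
                ps.executeBatch(); // flush the remainder
                conn.commit();
            }
        }
    }
}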

In general it is better to use COPY (however the JDBC driver for 8.3 exposes that), as it is designed specifically for bulk loading.

Then it doesn't matter whether autocommit is on or off, because the COPY is a single statement.

Cheers,

Jeff
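
(For reference, a minimal sketch of the COPY route through the
CopyManager API of the current PostgreSQL JDBC driver -- whether the
8.3-era driver exposes this is exactly the caveat Jeff notes above. The
table name, input file, and connection details are hypothetical.)

import java.io.FileReader;
import java.io.Reader;
import java.sql.Connection;
import java.sql.DriverManager;
import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class CopyLoadDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection, table, and input file -- adjust to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret");
             Reader in = new FileReader("/tmp/data.csv")) {
            CopyManager cm = conn.unwrap(PGConnection.class).getCopyAPI();
            // One statement, one transaction: autocommit on or off makes no difference.
            long rows = cm.copyIn("COPY target FROM STDIN WITH CSV", in);
            System.out.println("Loaded " + rows + " rows");
        }
    }
}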

