Tomasz Ostrowski wrote:
On 2008-09-11 17:21, Jack Orenstein wrote:
Then do the processing in separate transactions like this (in pseudocode):
The id > last_id trick doesn't work for me -- I don't have an index that would
support it efficiently.
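For context, the `id > last_id` batching pattern under discussion can be sketched as below. This is a simulation over an in-memory sorted set rather than a real table; against the database it would be a `SELECT ... WHERE id > ? ORDER BY id LIMIT n` per transaction, which is only cheap when an index on `id` exists (the missing piece here). The class and method names are illustrative, not from the thread.

```java
import java.util.*;

public class KeysetBatch {
    // Simulates batched scanning with the id > last_id trick: each
    // "transaction" fetches the next batch of ids above the last one seen.
    static List<List<Integer>> scanInBatches(SortedSet<Integer> table, int batchSize) {
        List<List<Integer>> batches = new ArrayList<>();
        int lastId = Integer.MIN_VALUE;
        while (true) {
            // Equivalent of:
            //   SELECT id FROM big WHERE id > lastId ORDER BY id LIMIT batchSize
            List<Integer> batch = new ArrayList<>();
            for (int id : table.tailSet(lastId + 1)) {
                batch.add(id);
                if (batch.size() == batchSize) break;
            }
            if (batch.isEmpty()) break;           // no more rows: done
            batches.add(batch);
            lastId = batch.get(batch.size() - 1); // remember the last key processed
        }
        return batches;
    }

    public static void main(String[] args) {
        SortedSet<Integer> table = new TreeSet<>(Arrays.asList(1, 3, 4, 7, 9, 12, 15));
        System.out.println(scanInBatches(table, 3)); // [[1, 3, 4], [7, 9, 12], [15]]
    }
}
```

Each batch is a separate short transaction, so the scan never holds one snapshot open for hours; but without an index on the key, each `WHERE id > ?` turns into a sequential scan, which is why it doesn't help here.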
Turning on autocommit seems to work, I'm just not clear on the reason why.
Not knowing why will bite you some time.
Please provide some (pseudo-)code showing what you do. Do you mark rows as
processed? Do you save the output of processing to a database? IMHO without
that it is hard to solve the mystery, but I'll try below.
No, it's really as simple as what I said in my earlier email. The scan just
walks through BIG very slowly. On another connection, we're inserting into and
updating the same table, and in each of those transactions also updating TINY.
I played around with a JDBC test program, and so far cannot see how
the autocommit mode causes variations in what is seen by the
scan. The behavior I've observed is consistent with the SERIALIZABLE
isolation level, but 1) I thought the default was READ COMMITTED
When you do:
result = query("select something from sometable")
then all rows of the result will be cached by the client program.
I am very sure this is not happening. Maybe some rows are being cached
(depending on the fetch size), but certainly not all of them. It used to happen
with older drivers (7.4?), but I've been using 8.1 drivers (at least) for a
long time.
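As I understand the pgJDBC documentation for the 8.x drivers, the driver only
fetches rows incrementally with a cursor when several conditions hold at once;
otherwise it buffers the entire result set, as Tomasz describes. Expressed as a
small predicate (my paraphrase of the documented conditions, not actual driver
code):

```java
public class CursorCheck {
    // Paraphrase of the documented pgJDBC conditions for cursor-based
    // (batched) fetching; if any fails, the driver buffers all rows at once.
    static boolean willStream(boolean autoCommit, int fetchSize, boolean forwardOnly) {
        return !autoCommit      // must be inside a transaction (autocommit off)
            && fetchSize > 0    // Statement.setFetchSize(n) with n > 0
            && forwardOnly;     // ResultSet must be TYPE_FORWARD_ONLY
    }

    public static void main(String[] args) {
        System.out.println(willStream(true, 100, true));   // autocommit on: buffers everything
        System.out.println(willStream(false, 100, true));  // streams in batches of 100
        System.out.println(willStream(false, 0, true));    // no fetch size set: buffers
    }
}
```

This is also why the autocommit setting changes observed behavior: with
autocommit off and a fetch size set, the scan reads the table in batches
through a cursor inside one long transaction, rather than materializing the
whole result up front.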
Maybe some result set options you're using cause such memory usage?
Jack