Gurjeet Singh wrote:
On Wed, Apr 23, 2008 at 12:29 AM, David Wilson <david.t.wilson@xxxxxxxxx> wrote:

On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen <joris@xxxxxxxxxxxxxxxxxxxxx> wrote:
>
> Ah, yes, all visible rows...
> My point is that, unless you use a transaction with serializable
> isolation, the set of "all visible rows" seen by the second statement
> might be different from the rows you copied into the log table.
>
> With the normal read committed isolation level you suffer from a
> possible nonrepeatable read that might change tuple visibility between
> different statements.
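To make that hazard concrete, here is a sketch of the failure mode under read committed; the table names staging and log are illustrative assumptions only:

BEGIN;  -- default READ COMMITTED
INSERT INTO log SELECT * FROM staging;  -- statement 1 copies the rows visible now
-- a concurrent transaction commits new rows into staging at this point
DELETE FROM staging;  -- statement 2 takes a fresh snapshot: it also deletes the
                      -- new rows, which were never copied into log
COMMIT;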
That depends on the implementation. A select into ... to do the initial
copy, followed by a delete where ... whose where clause references the
log table itself (so that we delete only rows that now exist in the log
table), or a row-by-row insert/delete pair: either would provide the
appropriate protection against accidentally deleting more than you
intended, without harming concurrency. The delete that references the
log table might require an index on the log table for performance, but
such indexing would likely be done anyway for general log use.
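As a rough sketch of that first variant, written with INSERT ... SELECT for the copy (the table names staging and log and the key column id are assumptions, not from the thread), the delete references the log table so it can only remove rows that were actually copied:

BEGIN;
INSERT INTO log SELECT * FROM staging;
DELETE FROM staging s USING log l WHERE s.id = l.id;  -- only rows now present in log
COMMIT;

An index on log(id) helps that delete, as mentioned above.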
Of course, that is another way to define "visibility" in this context:
present in the log table. The point is that a suitable definition is
needed.
I think this plpgsql function would solve the problem of an atomic
read-and-delete operation...
create or replace function log_rotate() returns void as $$
declare
    rec record;
begin
    -- delete every row from t1 and, within the same loop, insert it
    -- into t2, so each row is moved exactly once
    for rec in delete from t1 returning * loop
        insert into t2 values (rec.a, rec.b);
    end loop;
end;
$$ language plpgsql;

select log_rotate();
Don't forget ordering; this was important before...
START TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT ... INTO log FROM staging ORDER BY ...;
DELETE FROM staging;
COMMIT;
I don't know whether that ORDER BY works, but it should in this case.
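Side note: if log already exists, plain SQL's SELECT ... INTO would try to create a new table, so the copy is probably better done with INSERT ... SELECT. A sketch, where created_at is just a placeholder sort column:

START TRANSACTION ISOLATION LEVEL REPEATABLE READ;
INSERT INTO log SELECT * FROM staging ORDER BY created_at;  -- placeholder sort column;
                                                            -- the ORDER BY fixes the order
                                                            -- rows are handed to the insert
DELETE FROM staging;  -- removes only rows visible in this transaction's snapshot
COMMIT;

The ORDER BY matters if, for example, log assigns serial values on insert; rows committed into staging after this snapshot are untouched by the DELETE and stay behind for the next rotation.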
- Joris