Hello Postgres Hackers,

We have a simple 'event log' table that is insert-only (written by multiple concurrent clients). It has an integer primary key. Every 5 minutes or so we want to query it incrementally, i.e. "select * from events where id > LAST_ID_I_GOT", and insert the results into a separate reporting database.

The problem is that this simple approach has a race that will skip not-yet-committed events forever. For example, if id 5000 was committed before id 4999, and our poll sees 5000, we will never go back and pick up 4999 when it finally commits. How can we solve this? It is basically a phantom-row problem, but one that spans transactions.

I looked at checking the internal 'xmin' system column, but the docs say that is 32-bit, while something like 'txid_current_snapshot' returns a 64-bit value, and I don't understand how the two relate. All I want is to make sure I skip over any rows that are newer than the oldest currently running transaction. Has nobody else run into this before?

Thank you very much.

-- 
Karl Pickett

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
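P.S. Here is the rough shape of what I am picturing, in case it makes the question clearer. This is only a sketch of the "skip rows newer than the oldest running transaction" idea, and it assumes we add an extra bigint column (here called "txid", our own addition, not in the current schema) that writers fill in with txid_current():

```sql
-- Assumed one-time schema change: record the writing transaction's 64-bit id
-- on every insert (txid, watermark, and :last_watermark are our own names).
ALTER TABLE events ADD COLUMN txid bigint NOT NULL DEFAULT txid_current();

-- At each poll, take a watermark: the oldest transaction id that was still
-- in progress when the snapshot was taken. Everything below it is either
-- committed or aborted, so it can never appear later.
SELECT txid_snapshot_xmin(txid_current_snapshot()) AS watermark;

-- Then copy only rows whose writing transaction is below the new watermark
-- and at or above the watermark saved from the previous poll.
SELECT *
FROM events
WHERE txid >= :last_watermark   -- watermark saved from the previous poll
  AND txid <  :watermark;       -- watermark taken just now
```

Is something along these lines the intended use of the txid_* functions, or am I misreading how the 32-bit xmin and the 64-bit txid values line up?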