On Thu, Jun 11, 2009 at 2:48 PM, Marko Kreen <markokr@xxxxxxxxx> wrote:
> On 6/11/09, Matt Amos <zerebubuth@xxxxxxxxx> wrote:
>> On Thu, Jun 11, 2009 at 1:13 PM, Brett Henderson <brett@xxxxxxxxxx> wrote:
>> >> See the pgq.batch_event_sql() function in Skytools [2] for how to
>> >> query txids between snapshots efficiently and without being affected
>> >> by long transactions.
>> >
>> > I'll take a look.
>>
>> it was looking at the skytools stuff which got me thinking about using
>> txids in the first place. someone on the osm-dev list had suggested
>> using PgQ, but we weren't keen on the schema changes that would have
>> been necessary.
>
> Except the trigger, PgQ does not need any schema changes?

I've been having a look, and it seems to me that PgQ requires some extra tables as well as the trigger. Am I missing something?

PgQ might be a good solution, but I'm worried that after calling pgq.finish_batch() the batch is released. This would mean it wouldn't be possible to regenerate older files (e.g. a few days to a week old) in case something unexpected went wrong. It might not be a major problem, though.

I think we could get the same functionality without the extra daemons by putting an insert trigger on those tables and recording the object id, version and 64-bit txid in another table. But if we're going to alter the schema, we might as well put the txid column directly into those tables...

cheers,

matt

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
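
For what it's worth, the trigger-plus-log-table idea above could be sketched roughly as below. This is only a sketch, not tested against the OSM schema: the table and column names ("nodes", "id", "version", "node_changes") and the snapshot literals are made up for illustration. It relies on the txid functions that are present since PostgreSQL 8.3 (txid_current(), txid_current_snapshot(), txid_visible_in_snapshot()):

```sql
-- Hypothetical change-log table: one row per inserted object version,
-- stamped with the 64-bit txid of the inserting transaction.
CREATE TABLE node_changes (
    obj_id   bigint NOT NULL,
    version  bigint NOT NULL,
    txid     bigint NOT NULL DEFAULT txid_current()
);

CREATE OR REPLACE FUNCTION log_node_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO node_changes (obj_id, version, txid)
    VALUES (NEW.id, NEW.version, txid_current());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER node_change_log
    AFTER INSERT ON nodes
    FOR EACH ROW EXECUTE PROCEDURE log_node_change();

-- To regenerate a diff between two saved snapshots (the same idea
-- pgq.batch_event_sql() uses), select the rows whose txid is visible
-- in the newer snapshot but not in the older one. The snapshot
-- literals here ('xmin:xmax:xip_list') are placeholders; in practice
-- you would store the output of txid_current_snapshot() after each run.
SELECT obj_id, version
  FROM node_changes
 WHERE NOT txid_visible_in_snapshot(txid, '10:20:10,15')  -- older snapshot
   AND txid_visible_in_snapshot(txid, '30:30:');          -- newer snapshot
```

Because the log table keeps every row until you choose to prune it, older diffs could be regenerated at any time, which would sidestep the pgq.finish_batch() concern above.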