> We have an application design which includes a potential 2
> billion row table (A). When the application user kicks off
> an analysis process, there is a requirement to perform a
> check on that table to verify that the data within hasn't changed.
...
> But my main concern is performance - I fear that this won't be able to
> perform (in a few seconds rather than minutes). Does the general
> approach make any sense? Can anyone suggest a better starting point?

It's not clear whether you need to check the entire table or just a
subset related to that user each time, but one solution that covers
either case is an AFTER INSERT OR UPDATE OR DELETE trigger that simply
increments a sequence. Lock the sequence down and make the trigger
function SECURITY DEFINER, and don't increment when an update hasn't
actually changed anything. If the sequence value is not the same as the
last time the app checked, the data is not the same. You can store the
sequence value in a table if you need some persistence. If you need to
check a user-derived subset of the data, use multiple sequences, or
simply increment values in a table yourself, like a version control
number. (A rough sketch of the single-sequence variant is at the end of
this message.)

--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation http://www.endpoint.com/
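For concreteness, a minimal sketch of the single-sequence approach in
PL/pgSQL. The names big_table, data_version_seq, bump_data_version, and
app_reader are placeholders, not anything from the original design:

  -- Sequence acting as a whole-table version counter.
  CREATE SEQUENCE data_version_seq;

  -- Lock the sequence down: nobody but the owner may touch it directly.
  REVOKE ALL ON SEQUENCE data_version_seq FROM PUBLIC;

  -- SECURITY DEFINER lets the trigger bump the locked-down sequence
  -- even when the writing user has no privileges on it.
  CREATE FUNCTION bump_data_version() RETURNS trigger
  LANGUAGE plpgsql SECURITY DEFINER AS $$
  BEGIN
      -- Skip updates that didn't actually change the row.
      IF TG_OP = 'UPDATE'
         AND ROW(NEW.*) IS NOT DISTINCT FROM ROW(OLD.*) THEN
          RETURN NULL;
      END IF;
      PERFORM nextval('data_version_seq');
      RETURN NULL;  -- return value is ignored for AFTER triggers
  END;
  $$;

  CREATE TRIGGER big_table_version
      AFTER INSERT OR UPDATE OR DELETE ON big_table
      FOR EACH ROW EXECUTE PROCEDURE bump_data_version();

  -- Let the checking role read (but not advance) the counter;
  -- app_reader is a hypothetical role name.
  GRANT SELECT ON SEQUENCE data_version_seq TO app_reader;

The app then checks for changes with:

  SELECT last_value FROM data_version_seq;

and compares it against the value it saved last time. One caveat:
nextval() is not rolled back with its transaction, so a rolled-back
write still bumps the counter. For this use that only produces an
occasional false "data changed", never a false "unchanged", which is
the safe direction to err in.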