On Mon, May 15, 2006 at 11:17:31AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <jnasby@xxxxxxxxxxxxx> writes:
> > On Sun, May 14, 2006 at 10:50:50PM -0400, Tom Lane wrote:
> >> Enzo Daddario <enzo@xxxxxxxxxxxxxxx> writes:
> >>> I am now required to remove thousands of obsolete records from table
> >>> "X" and even though I have removed all related rows in all related
> >>> tables, the deletion of rows in table "X" is taking WAY TOO LONG.
> >>
> >> You need to put indexes on the referencing columns. Then start a fresh
> >> session for doing the actual deletes (FK check plans are usually cached
> >> per-session).
>
> > Would SET CONSTRAINTS ... DEFERRED not help, or does it still use the
> > same machinery to do the checking, regardless of how much data there is
> > to check?
>
> It's the same machinery. We've speculated about having the thing switch
> over to doing a full-table recheck (comparable to what ADD CONSTRAINT
> FOREIGN KEY does) once the number of pending individual row checks
> exceeds some threshold, but that's not done yet --- and it's not clear
> how to do it in a concurrent fashion, so don't hold your breath ...

I'm assuming that in order for deferred constraints to work we must have
a list of what's changed... couldn't that list be fed into an appropriate
query and then planned accordingly? That would allow the backend to use
the best validation method available: merge join, hash join, etc. If the
number of changed rows is small, the planner would probably pick a nested
loop (I'm assuming that's roughly how the IMMEDIATE case works); if the
number of changed rows is large it'd favor something else, but at least
it wouldn't be re-checking the entire source table.
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@xxxxxxxxxxxxx
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
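
[For readers hitting the same problem, a minimal sketch of the fix Tom
describes above. The table and column names are made up for illustration:
a referencing table "y" whose column "x_id" has a foreign key pointing at
"x", and a purely illustrative WHERE clause identifying the obsolete rows.]

    -- Index the referencing column so each per-row FK check fired by the
    -- delete can do an index lookup instead of seq-scanning "y".
    CREATE INDEX y_x_id_idx ON y (x_id);

    -- Reconnect (start a fresh session) so the FK check plans cached in
    -- the old session are discarded, then run the delete.
    DELETE FROM x WHERE last_used < '2005-01-01';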