I have an application that makes heavy use of foreign keys across all its tables. This is very nice, since the data stays consistent. There is also a "central" table which holds "sites"; a site is pretty much the crux of it all. Deleting a site precisely eliminates all data related to it, since there is ON DELETE CASCADE everywhere.

The only trouble I'm having is that the original developers apparently didn't account for large amounts of data. I'm starting to get a LOT of rows in some tables, and nowadays deleting a site takes a disgusting amount of time (in the range of tens of minutes). It's impossible to do via the Web, so I have to issue the central DELETE from the shell and leave it running until it's done.

Is there any way I can make things better? I could queue site drops and have a cron job pick them up instead of deleting "live" via the Web, but that's just silly patchwork IMHO.
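For reference, the schema shape described above looks roughly like this. This is a minimal sketch, not the actual application's schema: the table and column names (`sites`, `pages`, `visits`) are invented, and it uses SQLite via Python purely to stay self-contained. One detail worth noting: PostgreSQL does not automatically index the referencing columns of foreign keys, and missing indexes on those columns are a common reason cascaded deletes become slow as child tables grow.

```python
import sqlite3

# Illustrative schema (names are assumptions): a central "sites" table,
# with dependent tables declaring ON DELETE CASCADE foreign keys back
# to it, directly or transitively.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires this per connection

conn.executescript("""
CREATE TABLE sites (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL
);
CREATE TABLE pages (
    id      INTEGER PRIMARY KEY,
    site_id INTEGER NOT NULL REFERENCES sites(id) ON DELETE CASCADE,
    title   TEXT
);
CREATE TABLE visits (
    id      INTEGER PRIMARY KEY,
    page_id INTEGER NOT NULL REFERENCES pages(id) ON DELETE CASCADE,
    ts      TEXT
);
-- Without these, each cascaded delete must scan the child table to find
-- the rows to remove; PostgreSQL does NOT create them automatically.
CREATE INDEX pages_site_id_idx  ON pages(site_id);
CREATE INDEX visits_page_id_idx ON visits(page_id);
""")

conn.execute("INSERT INTO sites  VALUES (1, 'example')")
conn.execute("INSERT INTO pages  VALUES (10, 1, 'home')")
conn.execute("INSERT INTO visits VALUES (100, 10, '2007-01-01')")

# The single "central" delete: the cascade wipes pages and visits too.
conn.execute("DELETE FROM sites WHERE id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
print(remaining)  # 0: the cascade removed all dependent rows
```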