On Wed, Oct 6, 2021 at 12:00 PM Dirschel, Steve <steve.dirschel@xxxxxxxxxxxxxxxxxx> wrote:
Here is what I could see in Postgres:
- When I did an explain on the delete I could see it was full scanning the table. I did a full scan of the table interactively in less than 1 second, so the long runtime was not due to the full table scan.
If finding the rows is fast but actually deleting them is slow (and perhaps won't even finish), I would strongly consider adding a WHERE clause so that only a small fraction of the deletes are done, perhaps in a transaction that gets rolled back, and running explain (analyze, buffers) on that modified command. Yes, the planner may decide to use an index to find which rows to delete, but if finding the rows was already fast and it is the non-obvious work that we want to profile, then it should be fine to do 1% of the deletes and see how it performs and where the time goes.
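A minimal sketch of that approach (the table name, the original delete condition, and the id-based 1% filter are all hypothetical, since the actual delete statement wasn't quoted):

```sql
-- Hypothetical example: profile a slice of the delete, then roll back.
BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM my_table
WHERE created_at < '2020-01-01'   -- stand-in for the original delete's condition
  AND id % 100 = 0;               -- extra filter: roughly 1% of the matching rows

ROLLBACK;  -- nothing is actually deleted
```

Note that EXPLAIN (ANALYZE, BUFFERS) really executes the DELETE, which is why the ROLLBACK matters; the upside is that per-row work such as index maintenance and trigger or foreign-key checks is included in the reported timing, which is exactly the non-obvious cost being hunted here.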