I wrote:
> Gary Doades <gpd@xxxxxxxxxxxx> writes:
>> Ouch! That confirms my problem. I generated the random test case
>> because it was easier than including the dump of my tables, but you
>> can appreciate that tables 20 times the size are basically crippled
>> when it comes to creating an index on them.

> Actually... we only use qsort when we have a sorting problem that fits
> within the allowed sort memory. The external-sort logic doesn't go
> through that code at all. So all the analysis we just did on your test
> case doesn't necessarily apply to sort problems that are too large for
> the sort_mem setting.

I increased the size of the test case by 10x (basically s/100000/1000000/)
which is enough to push it into the external-sort regime. I get amazingly
stable runtimes now --- I didn't have the patience to run 100 trials, but
in 30 trials I have slowest 11538 msec and fastest 11144 msec. So this
code path is definitely not very sensitive to this data distribution.

While these numbers aren't glittering in comparison to the best-case
qsort times (~450 msec to sort 10% as much data), they are sure a lot
better than the worst-case times. So maybe a workaround for you is to
decrease maintenance_work_mem, counterintuitive though that be. (Now, if
you *weren't* using maintenance_work_mem of 100MB or more for your
problem restore, then I'm not sure I know what's going on...)

We still ought to try to fix qsort of course.

			regards, tom lane
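
For concreteness, the suggested workaround amounts to something like the
sketch below. The table definition is only an illustrative stand-in for
the actual random test case (which isn't reproduced in this message),
and the names test_sort and test_sort_val_idx are made up:

    -- Illustrative stand-in for the scaled-up random test case:
    -- one million rows of random integers.
    CREATE TABLE test_sort AS
        SELECT (random() * 1000000)::int AS val
        FROM generate_series(1, 1000000) AS g;

    -- Lower maintenance_work_mem so the index build no longer fits in
    -- memory and takes the external-sort path instead of qsort.
    -- (On releases that don't accept unit suffixes, give the value in
    -- kB, e.g. 16384.)
    SET maintenance_work_mem = '16MB';

    CREATE INDEX test_sort_val_idx ON test_sort (val);

With maintenance_work_mem set larger than the sort, the same CREATE
INDEX would instead run through the in-memory qsort path, which is where
the wildly variable timings were observed.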