Pavel Stehule <pavel.stehule@xxxxxxxxx> writes:
> On Mon, 30 Mar 2020 at 10:12, Silvio Moioli <moio@xxxxxxx> wrote:
>> ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6475.147..6494.111 rows=462600 loops=1)
>>       Output: rhnpackagecapability_1.name, rhnpackagecapability_1.version, rhnpackagecapability_1.id
>>       Sort Key: rhnpackagecapability_1.name
>>       Sort Method: quicksort  Memory: 79862kB
>>       Buffers: shared hit=7217
>>       ->  Seq Scan on public.rhnpackagecapability rhnpackagecapability_1  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.016..59.976 rows=490964 loops=1)
>> ->  Sort  (cost=299108.00..300335.41 rows=490964 width=79) (actual time=6458.988..6477.151 rows=462600 loops=1)
>>       Output: rhnpackagecapability.id, rhnpackagecapability.name, rhnpackagecapability.version
>>       Sort Key: rhnpackagecapability.name
>>       Sort Method: quicksort  Memory: 79862kB
>>       Buffers: shared hit=7217
>>       ->  Seq Scan on public.rhnpackagecapability  (cost=0.00..252699.00 rows=490964 width=79) (actual time=0.012..50.467 rows=490964 loops=1)

> I did some tests and it looks like the planner's penalization for sorting
> long keys is not too high. In your case that is the reason why the sort is
> very slow (probably due to slow locales). The estimated costs of the hash
> join and the sort then come out similar, although in reality they are not.

Yeah, the run time of the slow query seems to be almost entirely expended
in these two sort steps, while the planner doesn't think that they'll be
very expensive.  Tweaking unrelated cost settings to work around that is
not going to be helpful.

What you'd be better off trying to do is fix the slow sorting.  Is
rhnpackagecapability.name some peculiar datatype?  If it's just
relatively short text strings, as one would guess from the column name,
then what you must be looking at is really slow locale-based sorting.
What's the database's LC_COLLATE setting?  Can you get away with
switching it to C?

			regards, tom lane
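
[For readers following the thread: one way to act on Tom's questions is to check the collation setting and compare sort timings directly. A rough sketch, using the table and column names from the plan above; actual results depend on the database:]

```sql
-- Check the database's collation setting that Tom asks about
SHOW lc_collate;

-- Baseline: sort under the database's default (locale-based) collation
EXPLAIN (ANALYZE, BUFFERS)
SELECT name FROM rhnpackagecapability ORDER BY name;

-- Same sort forced to the byte-wise "C" collation for comparison
EXPLAIN (ANALYZE, BUFFERS)
SELECT name FROM rhnpackagecapability ORDER BY name COLLATE "C";
```

If the "C"-collated sort is much faster, the column itself can be switched (e.g. `ALTER TABLE rhnpackagecapability ALTER COLUMN name TYPE text COLLATE "C"`), provided the application can live with byte-wise ordering.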