On Thu, 31 Aug 2023 at 06:32, Rondat Flyag <rondatflyag@xxxxxxxxx> wrote:
> I tried VACUUM ANALYZE for three tables, but without success. I also tried to set enable_seqscan=off and the query took even more time. If I set enable_sort=off then the query takes a lot of time and I cancel it.
>
> Please see the attached query plans.

It's a little hard to comment here, as I can't see what the plan was before, when you were happy with the performance. I also see that the queries you mentioned in the initial email don't match the plans: there's no table called "isbns" in the query. I guess this is "asins"?

You could likely get a faster plan if there were an index on asins_statistics (asin_id). That would allow a query plan that scans the isbns_value_key index and performs a parameterised nested loop on asins_statistics using the asins_statistics (asin_id) index. Looking at your schema, I don't see that index, so it's pretty hard to guess why the plan used to be faster. Even if the books/asins merge join used to take place first, there would have been no efficient way to join to the asins_statistics table while preserving the Merge Join's order (I'm assuming non-parameterised nested loops would be inefficient in this case). Doing that would also have required the asins_statistics (asin_id) index. Are you sure that index wasn't dropped?

However, it's likely a waste of time to try to figure out what the plan used to be. Better to focus on making the query faster now. I suggest you create the asins_statistics (asin_id) index. That said, I can't say with any confidence that the planner would opt to use that index if it did exist. Lowering random_page_cost or increasing effective_cache_size would increase the chances of that.

David
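P.S. Concretely, the suggestions above amount to something like the following. The index definition uses the table and column names from your schema; the two setting values are only illustrative starting points, not recommendations for your hardware:

```sql
-- Index to enable a parameterised nested loop join into asins_statistics
CREATE INDEX ON asins_statistics (asin_id);

-- Optionally nudge the planner toward index scans. These values are
-- examples only; tune them to your storage and available memory.
SET random_page_cost = 1.1;
SET effective_cache_size = '8GB';
```

You can check whether the planner actually picks the new index by re-running the query under EXPLAIN (ANALYZE, BUFFERS) and comparing it with the plans you attached.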