> On 16/03/2010 15:25, Richard Huxton wrote:
>> OK - we have a merge join in the first case, where it joins the
>> pre-sorted output of both tables.
>>
>> In the second case it queries the index once for each row in "cellules".
>>
>> Now look at the costs. The first one is around 704,000 and the second
>> one is 5,000,000 - about 7 times as much. That's why it's not using the
>> index: it thinks it will be more expensive.
>>
>> If it's not really more expensive, that suggests your configuration
>> values aren't very close to reality.
>>
>> The first query should also run faster if it has more work_mem available.
>> At the moment it's probably going back and forth doing an on-disk sort.
>
> Indeed! I admit that I had not tested the second query; I just thought
> that the first one took way too long to execute.
>
> I will try increasing work_mem, but it is already set at 16MB, which I
> thought was quite high.

What do you mean by "high"? I believe the proper value of work_mem is the
one that gives the best performance for the query without causing problems
for the other sessions.

Don't forget you can set this for each session separately (SET work_mem =
something), so you can keep a conservative value in postgresql.conf and use
a different value (e.g. 64MB) for batch processing. I guess you don't run
such an UPDATE in regular sessions, right?

regards
Tomas
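
PS: the per-session override could look something like this (the 64MB value
is just an example and the UPDATE is a placeholder for your real statement,
so adjust both to your case):

  -- keep a conservative default in postgresql.conf, e.g. work_mem = 16MB

  BEGIN;
  SET LOCAL work_mem = '64MB';   -- affects only this transaction
  -- ... your big UPDATE on "cellules" goes here ...
  COMMIT;

  -- or, for the whole session:
  SET work_mem = '64MB';
  -- ... run the batch statements ...
  RESET work_mem;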