On Fri, Aug 19, 2011 at 2:37 PM, Edoardo Panfili <edoardo@xxxxxxxx> wrote:
>
> work_mem = 1MB
> random_page_cost = 4
>
> I am using an SSD but the production system uses a standard hard disk.
>
> I did a try also with
> set default_statistics_target=10000;
> vacuum analyze cartellino;
> vacuum analyze specie; -- the base table for specienomi
> vacuum analyze confini_regioni;
>
> but is always 4617.023 ms

OK, try turning up work_mem for just this connection, i.e.:

psql mydb
set work_mem='64MB';
explain analyze select .... ;

and see if you get a different plan. Often you only need a slightly
higher work_mem to get a better plan. We're looking for a hash join to
occur here, which should be much, much faster.

After testing you can set work_mem globally in the postgresql.conf
file. Try to keep it smallish, as it's allocated per sort, per
connection, so usage can go up really fast with a lot of active
connections and swamp your server's memory. I run a machine with 128GB
of memory and ~500 connections, and have work_mem set to 16MB.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
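The "per sort, per connection" caution above can be sketched as
back-of-the-envelope arithmetic (the function name and the
sorts-per-query figure here are illustrative assumptions, not
measurements from this thread):

```python
# Rough worst-case memory estimate for a work_mem setting.
# Real usage is usually far lower: not every connection is active,
# and not every query sorts or hashes. This only bounds the worst case.
def worst_case_mb(work_mem_mb, sorts_per_query, connections):
    # Each sort/hash node in each active connection's query may use
    # up to work_mem on its own.
    return work_mem_mb * sorts_per_query * connections

# 64MB work_mem, 2 sort/hash nodes per query, 500 connections:
print(worst_case_mb(64, 2, 500))   # 64000 MB, i.e. ~62.5 GB
# The 16MB setting mentioned above bounds the same workload much lower:
print(worst_case_mb(16, 2, 500))   # 16000 MB, i.e. ~15.6 GB
```

This is why a value that is safe to SET for one troubleshooting session
can still be too large as a global postgresql.conf default.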