On 7.1.2010 15:23, Lefteris wrote:
> I think what you all said was very helpful and clear! The only part
> that I still disagree/don't understand is the shared_buffer option:))

Did you ever try increasing shared_buffers to what was suggested (around 4 GB) and see what happens? I didn't see it in your posts.

Shared_buffers can be thought of as PostgreSQL's internal cache. If the pages being scanned for a particular query are in that cache, it helps performance very much on repeated executions of the same query. OTOH, since the file system's cache didn't help you significantly, there is little chance shared_buffers will either. It is still worth trying.

From the description of the data ("...from years 1988 to 2009...") it looks like the query for "between 2000 and 2009" pulls out about half of the data. Even if an index could be used instead of a seqscan, it would perhaps be only 50% faster, which is still not comparable to the other databases.

The table is very wide, which is probably why the tested databases deal with it faster than PG. You could try narrowing the table down (for instance, removing the Div* fields) to make the data more "relational-like". In real life, speedups in circumstances like these would probably be gained by normalizing the data, making the basic table smaller and easier to use with indexing.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
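[For reference, a shared_buffers change goes in postgresql.conf and only takes effect after a server restart. A minimal sketch; the 4 GB figure is simply the value suggested earlier in the thread, not a general recommendation:]

```
# postgresql.conf -- requires a server restart to take effect
shared_buffers = 4GB    # value suggested earlier in this thread;
                        # the default shipped setting is far lower
```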
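[The index and table-narrowing ideas above could be tried along these lines. All object and column names here are hypothetical, since the thread doesn't show the actual schema; I'm assuming the table is called "ontime" with a "year" column, as in the usual airline on-time dataset:]

```sql
-- Hypothetical names: table "ontime" and column "year" are assumptions.
CREATE INDEX ontime_year_idx ON ontime (year);

-- A narrowed copy without the wide Div* fields, to compare seqscan speed
-- against the full-width table (column subset is illustrative only):
CREATE TABLE ontime_narrow AS
    SELECT year, month, dayofmonth, deptime, arrdelay
    FROM ontime;
ANALYZE ontime_narrow;
```

[Comparing EXPLAIN ANALYZE output for the same query on both tables would show how much of the cost is just the row width.]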