It is amazing how, after working with databases very actively for over
8 years, I am still learning things.

Thanks,

Lance Campbell
Project Manager/Software Architect
Web Services at Public Affairs
University of Illinois
217.333.0382
http://webservices.uiuc.edu

-----Original Message-----
From: Scott Marlowe [mailto:scott.marlowe@xxxxxxxxx]
Sent: Friday, November 09, 2007 1:13 PM
To: Campbell, Lance
Cc: Heikki Linnakangas; pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: work_mem and shared_buffers

On Nov 9, 2007 12:08 PM, Campbell, Lance <lance@xxxxxxxx> wrote:
> How do you know when you should up the value of work_mem? Just play
> with the number? Is there a query I could do that would tell me if
> PostgreSQL is performing SQL that could use more memory for sorting?

Trial and error. Note that you can set work_mem for a given session.

While it may seem that making work_mem bigger will always help, that's
not necessarily the case. Using this query:

    select count(*) from (
        select * from myreporttable
        where lasttime > now() - interval '1 week'
        order by random()
    ) as l

I did the following (I ran the query by itself once to fill the
buffers / cache of the machine with the data):

    work_mem    Time
    --------    ------------
    1000kB      29215.563 ms
    4000kB      20612.489 ms
    8000kB      18408.087 ms
    16000kB     16893.964 ms
    32000kB     17681.221 ms
    64000kB     22439.988 ms
    125MB       23398.891 ms
    250MB       25461.797 ms

Note that my best time was at around 16 MB of work_mem. This data set
is MUCH bigger than 16 MB; it's around 300-400 MB. But work_mem was
optimal at around 16 MB. By the way, I tried going as high as 768 MB,
and it was still slower than 16 MB. This machine has 2 GB of RAM and
is optimized for I/O, not CPU, performance.
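
If you want to repeat this kind of experiment yourself, a minimal
sketch from psql looks like the following (myreporttable and lasttime
are just the names from the query above; \timing is a psql
meta-command, and the "Sort Method" line in EXPLAIN ANALYZE output is
only printed by newer PostgreSQL releases):

    -- Time each statement from psql.
    \timing on

    -- work_mem can be changed for just this session; other sessions
    -- are unaffected.
    SET work_mem = '16MB';

    -- Re-run the test query and compare timings across settings.
    SELECT count(*) FROM (
        SELECT * FROM myreporttable
        WHERE lasttime > now() - interval '1 week'
        ORDER BY random()
    ) AS l;

    -- EXPLAIN ANALYZE shows whether the sort fit in memory or spilled
    -- to disk (e.g. "Sort Method: external merge  Disk: ...kB" versus
    -- "Sort Method: quicksort  Memory: ...kB").
    EXPLAIN ANALYZE
    SELECT * FROM myreporttable
    WHERE lasttime > now() - interval '1 week'
    ORDER BY random();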