> Anyway, the original writer didn't specify an architecture. If it is a
> 32-bit one, it is entirely possible that the memory map simply has no
> large contiguous space to map the shared memory.

It's 32-bit. The actual problem of giving more buffers to PostgreSQL was
solved with the help of the following post:

http://docs.freebsd.org/cgi/getmsg.cgi?fetch=83003+0+archive/2002/freebsd-hackers/20020804.freebsd-hackers

It looks like, despite the comment in /usr/src/sys/i386/include/vmparam.h:

#ifndef MAXDSIZ
#define MAXDSIZ         (512UL*1024*1024)       /* max data size */
#endif

on FreeBSD MAXDSIZ actually tells the kernel where to start allocating
memory, not the maximum allowable size: as soon as I lowered this value from
2500UL*1024*1024 (what I had set when setting up the server) to
1024UL*1024*1025, I was able to further increase shared buffers in
postgresql.conf.

Also, while I can agree with the point that "maybe the OS file caching
algorithm is more efficient than PostgreSQL's", that still doesn't give us a
definitive answer, because:

1) for PostgreSQL, fetching data from the OS buffers should imply some
overhead compared to accessing data already cached in shared buffers.

2) there is no guarantee that the OS dedicates all of the remaining RAM to
file caching. In fact, if there are other processes running on the server, I
may want to make sure that a certain amount of memory is dedicated solely to
PostgreSQL data caching, and the only way to do that is to increase shared
buffers.

Later today I will do some performance testing with shared buffers set to
50k as Tom suggested, and then with, let's say, 200k, and post the results
here.

-- 
Vlad
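P.S. For anyone who wants to check directly how large a single SysV shared
memory segment their 32-bit address space can still accommodate, a rough
test program along these lines can be used. It is only a sketch, nothing
PostgreSQL-specific: it just does the same shmget()/shmat() steps the
postmaster does at startup, and it assumes kern.ipc.shmmax/shmall have
already been raised above the size you pass in.

/*
 * shmtest.c -- rough check of how large a single SysV shared memory
 * segment this process can create and attach.
 *
 * Usage: ./shmtest <megabytes>
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(int argc, char **argv)
{
    size_t size;
    int shmid;
    void *addr;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
        return 1;
    }
    size = (size_t) atol(argv[1]) * 1024 * 1024;

    /* Create a private segment of the requested size. */
    shmid = shmget(IPC_PRIVATE, size, IPC_CREAT | 0600);
    if (shmid == -1) {
        perror("shmget");
        return 1;
    }

    /* Try to map it into our address space. */
    addr = shmat(shmid, NULL, 0);
    if (addr == (void *) -1) {
        perror("shmat");
    } else {
        printf("attached %lu MB at %p\n",
               (unsigned long) (size / (1024 * 1024)), addr);
        shmdt(addr);
    }

    /* Mark the segment for removal so it doesn't linger. */
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}

Compile and run with e.g. "cc -o shmtest shmtest.c && ./shmtest 1500". If
shmget() succeeds but shmat() fails with ENOMEM, the segment exists but
there is no contiguous hole left in the process address space to attach it,
which is the situation described above that lowering MAXDSIZ relieved.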