Do psql calls/procedures access resources reserved from kernel.shmmax?
How about the tar or copy sysadmin commands? I would guess they don't
use kernel.shmmax resources. Finally, work memory also does not access
resources reserved from kernel.shmmax, correct?

Thanks for clearing things up.

-----Original Message-----
From: Scott Marlowe [mailto:smarlowe@xxxxxxxxxxxxxxxxx]
Sent: Thursday, May 12, 2005 11:21 AM
To: Kavan, Dan (IMS)
Cc: postgres
Subject: RE: [ADMIN] memory allocation ; postgresql-8.0

On Thu, 2005-05-12 at 10:10, Kavan, Dan (IMS) wrote:
> Hi Scott,
>
> Thanks again for all your tips.
>
> If I knock the buffer size down to 65,536 (still higher than what you
> are recommending) then my shmmax becomes:
> 256,000 + 550,292,685 (65536 * 8396.8) + 1,454,100 = 552,002,785
>
> That will leave me with 3.5 GB of free memory for the system & work
> memory to use. Will those free system resources ever get used with a
> 10 million record, 10 GB database?

Certainly. As you access the data, the kernel will cache everything
sent through it. Once the machine has been up and processing for a
while, you should see top output showing "free" memory down to a few
megs (8 to 30 MB is typical), with all the rest of the memory being
used as kernel cache.

> If I go with 65,536 as my buffer size, would having SHMMAX set to
> 1 GB in my sysctl.conf system parameters allow me to run two separate
> instances of postgresql on 2 separate ports?

Yes, but you may want to set it just a tad higher for things like fsm
and whatnot.

Definitely benchmark both the 64k setting of shared_buffers and lower
settings, looking for a knee with your data set. It may well be that
the best performance happens at a lower number, and doesn't really
increase as you bump up shared_buffers. Be sure to test things as
realistically as possible, i.e. the right number of parallel users and
all that.
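
As a rough sketch of what the settings discussed above might look like
(the values are only illustrative, pulled from the numbers in this
thread; paths and the second port are assumptions, not recommendations):

    # /etc/sysctl.conf -- kernel shared memory ceiling, roughly the 1 GB
    # discussed above; reload with: sysctl -p
    kernel.shmmax = 1073741824

    # postgresql.conf for instance 1 (default port)
    shared_buffers = 65536      # 8.0 counts buffers, 8 KB each (~512 MB)
    port = 5432

    # postgresql.conf for a second instance with its own data directory
    shared_buffers = 65536
    port = 5433

Each instance would need its own data directory (created with initdb)
and its own port, e.g. pg_ctl -D /path/to/data2 start for the second.
Note that kernel.shmmax limits the size of a single shared memory
segment, while kernel.shmall caps the total across all segments, so
with two instances it is worth checking both.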