Tom Lane wrote:
> "Ryan Hansen" <ryan.hansen@xxxxxxxxxxxxxxxxxx> writes:
> [...]
>> but when I set the shared buffer in PG and restart
>> the service, it fails if it's above about 8 GB.
>
> Fails how?  And what PG version is that?

The thread seems to end here as far as the specific question was concerned. I just ran into the same issue, also on Ubuntu Hardy with PG 8.2.7: if I set shared_buffers to 8 GB, starting the server fails with

2009-01-06 17:15:09.367 PST 6804 DETAIL: Failed system call was shmget(key=5432001, size=8810725376, 03600).

I then take the requested size from the error and do

echo 8810725376 > /proc/sys/kernel/shmmax

and get the same error again. If I try that with shared_buffers = 7 GB (setting shmmax to 7706542080), it works. Even if I double the value for 8 GB and set shmmax to 17621450752, I get the same error. There seems to be a ceiling. Earlier in this thread somebody mentioned they had set shared buffers to 24 GB on CentOS, so it seems to be a platform issue.

I also tried doubling SHMMNI, from 4096 to 8192, as the PG error suggests, but to no avail.

This is a new 16-core Dell box with 64 GB of RAM and a mid-range controller with 8 spindles in RAID 0+1, one big filesystem. The database is currently 55 GB in size with a web-application-type OLTP load, doing ~6000 tps at peak time (and growing fast).

The problem surfaced here because we just upgraded from an 8-core server with 16 GB of RAM, with very disappointing results initially. The new server would go inexplicably slow near peak time, with context switches around 100k and locks going ballistic. It seemed worse than on the smaller machine, until we revised the configuration, which I'd just copied over from the old box, and adjusted shared_buffers from 2 GB to 4 GB. Now it seems to perform well.
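One thing worth checking, as an assumption on my part rather than anything confirmed in this thread: besides kernel.shmmax (the largest single segment, in bytes), Linux also enforces kernel.shmall, the total shared memory allowed system-wide, and that one is counted in *pages*, not bytes. If shmall is left at its default while only shmmax is raised, a large shmget() can still fail, which would look exactly like a ceiling. A minimal sketch of the arithmetic, using the segment size from the error above and assuming the usual 4 KiB page size:

```shell
#!/bin/sh
# Sketch: work out shm limits for the failing segment.
# Note: kernel.shmall is measured in pages, not bytes, which is
# easy to miss when only shmmax has been raised.

SEGMENT_BYTES=8810725376           # size reported by the shmget() error
PAGE_SIZE=$(getconf PAGE_SIZE)     # typically 4096

# shmmax: largest single segment, in bytes
echo "shmmax needed: $SEGMENT_BYTES"

# shmall: system-wide total, in pages (round up by one page)
SHMALL_PAGES=$(( SEGMENT_BYTES / PAGE_SIZE + 1 ))
echo "shmall needed: $SHMALL_PAGES pages"

# To apply (as root); add to /etc/sysctl.conf to persist across reboots:
# sysctl -w kernel.shmmax=$SEGMENT_BYTES
# sysctl -w kernel.shmall=$SHMALL_PAGES
```

The current values can be compared with `cat /proc/sys/kernel/shmall` and `ipcs -lm`; if shmall times the page size comes out below 8810725376, that would explain the failure here.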
I found that surprising, given that 2 GB is quite a lot already, and since I'd gathered that the benefits of cranking up shared_buffers are not scientifically proven - that often, if not most of the time, the OS's caching mechanisms are adequate or even superior to what you might achieve by fiddling with the PG configuration and setting shared buffers very high.

Regards,
Frank

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance