Hey all,

This may be more of a Linux question than a PG question, but I’m wondering if any of you have successfully allocated more than 8 GB of memory to PG before. I have a fairly robust server running Ubuntu Hardy Heron with 24 GB of memory, and I’ve tried to commit half of that to PG’s shared buffers, but it seems to fail. I set the kernel shared memory limits accordingly using sysctl, which appears to work fine, but when I raise shared_buffers in PG and restart the service, it fails to start whenever the setting is above roughly 8 GB. For now I have it set at 6 GB. I don’t have the exact failure message handy, but I can certainly get one if that helps. Mostly I’m just looking to know whether there’s any general reason it would fail, some inherent kernel or database limitation that I’m unaware of.
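In case it’s useful, here’s roughly the shape of what I’m doing. The exact values below are illustrative (I don’t have the box in front of me), but the parameters are the usual sysctl shared-memory pair plus shared_buffers:

    # /etc/sysctl.conf -- kernel shared memory limits (values illustrative)
    kernel.shmmax = 13958643712    # max segment size in bytes (~13 GB, headroom over a 12 GB buffer)
    kernel.shmall = 3407872        # total shared memory in 4096-byte pages (shmmax / 4096)
    # applied with: sysctl -p

    # postgresql.conf
    shared_buffers = 12288MB       # anything much above ~8 GB fails on restart; 6144MB works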
If it matters, this database is going to host and process hundreds of GB, and eventually TB, of data. It’s a heavy read-write system rather than transactional processing: mostly data-file parsing (Python/bash) and bulk loading. The disks already get hit pretty hard, so I want to make the most of the large amount of available memory wherever possible, and I’m trying to tune in that direction.

Any info is appreciated. Thanks!