Good day,
I'm trying to set up a Chef recipe to reserve enough HugePages on a Linux system for our PG servers. A given VM will only host one PG cluster, and that will be the only thing on the host that uses HugePages. The blogs I've seen suggest it should be as simple as dividing the shared_buffers setting by the huge page size (2MB); however, I found that I needed more than that.
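For reference, the naive sizing from those blogs is roughly what my recipe does today, something like this (just a sketch; the attribute name is my own, and I happen to use the sysctl resource):

    # Naive sizing: shared_buffers divided by the 2MB huge page size.
    # node['postgresql']['shared_buffers_mb'] is our own attribute, computed elsewhere.
    shared_buffers_mb = node['postgresql']['shared_buffers_mb'] # e.g. 4003
    huge_page_mb      = 2
    nr_hugepages      = (shared_buffers_mb.to_f / huge_page_mb).ceil # 4003 -> 2002 pages

    sysctl 'vm.nr_hugepages' do
      value nr_hugepages
    end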
In my test case, shared_buffers is set to 4003MB (calculated by Chef), but PG failed to start until I reserved a few hundred more MB. When I checked VmPeak, it was 4321MB, so I ended up having to reserve over 2161 huge pages, over a hundred more than I had originally calculated.
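For what it's worth, this is roughly how I checked it (the data directory path is just where my cluster happens to live):

    # Read the postmaster PID, look up its VmPeak, and round up to whole 2MB pages.
    pid = File.read('/var/lib/pgsql/data/postmaster.pid').lines.first.strip.to_i
    vmpeak_kb = File.read("/proc/#{pid}/status")[/^VmPeak:\s+(\d+) kB/, 1].to_i
    pages_needed = (vmpeak_kb / 1024.0 / 2).ceil # 4321MB -> 2161 pages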
I'm told other factors contribute to this additional memory requirement, such as max_connections, wal_buffers, etc. I'm wondering if anyone has been able to come up with a reliable method for determining the HugePages requirements for a PG cluster based on the GUC values (that would be known at deployment time).
Thanks,