
Re: Calculating vm.nr_hugepages

On Wed, Aug 30, 2023 at 8:12 AM Troels Arvin <troels@xxxxxxxx> wrote:
Hello,

I'm writing an Ansible play to set the correct value for
vm.nr_hugepages on Linux servers, where I hope to have Postgres use
huge pages.

However, I'm struggling to find the right formula.

I assume I need to find the same value I get from running "postgres
-C shared_memory_size_in_huge_pages"; I call that my target value.
Note: I cannot simply run "postgres -C ...", because my Ansible play
needs to work against a server where Postgres is already running.

I've tried using the formula described at
https://www.cybertec-postgresql.com/en/huge-pages-postgresql/, but it
produces a different value than my target:

Using a shared_buffers value of 21965570048, like in Cybertec
Postgresql's example:
"postgres ... -C 21965570048B" yields: 10719
The formula from Cybertec Postgresql says: 10475

I've also tried doing what ChatGPT suggested:
Number of Huge Pages when shared_buffers is set to 1 GiB =
shared_buffers / huge_page_size
                     = 1073741824 bytes / 2097152 bytes
                     = 512
But that's also wrong compared to "postgres -C ..." (which said 542).

Which formula can I use? It's OK if the result is slightly off
compared to "postgres -C", but then it needs to err slightly high,
so that I'm sure there are enough huge pages for Postgres to be able
to use them properly.
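The naive calculation quoted above can be sketched as follows (a hedged illustration, not the exact accounting "postgres -C" performs). It undercounts because PostgreSQL's shared memory segment holds more than shared_buffers alone, e.g. WAL buffers and lock tables:

```ruby
# Naive estimate: shared_buffers divided by the huge page size.
# This is a sketch; "postgres -C shared_memory_size_in_huge_pages"
# accounts for the whole shared memory segment, hence 542 vs. 512.
shared_buffers = 1_073_741_824     # 1 GiB, in bytes
huge_page_size = 2 * 1024 * 1024   # 2 MiB huge pages
naive_pages = shared_buffers / huge_page_size
puts naive_pages                   # 512, vs. the 542 reported by "postgres -C"
```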

Good morning Troels,

I had a similar thread a couple of years ago; you may want to read it:

https://www.postgresql.org/message-id/flat/CAHJZqBBLHFNs6it-fcJ6LEUXeC5t73soR3h50zUSFpg7894qfQ%40mail.gmail.com

In it, Justin Pryzby provides the detailed code for exactly what factors into huge pages, if you require that level of precision.

I hadn't seen that Cybertec blog post before. I ended up using my own equation, derived in that thread after Justin shared his info. The Chef/Ruby code involved is:

padding = 100
if shared_buffers_size > 40_000
  padding = 500
end
# Sizes are in MB. Note: with integer division, "/ 2" would make .ceil a
# no-op, so divide by 2.0 to actually round up.
shared_buffers_usage = shared_buffers_size + 200 + (25 * shared_buffers_size / 1024)
max_connections_usage = (max_connections - 100) / 20
wal_buffers_usage = (wal_buffers_size - 16) / 2
# Value to set for vm.nr_hugepages (the sysctl name itself isn't a valid Ruby identifier):
nr_hugepages = ((shared_buffers_usage + max_connections_usage + wal_buffers_usage + padding) / 2.0).ceil


wal_buffers_size is usually 16MB, so wal_buffers_usage ends up being zeroed out. This has worked out for our various postgres VM sizes. There will obviously be a few extra huge pages that go unused, but these VMs are dedicated to PostgreSQL, and shared_buffers_size defaults to 25% of VM memory, so there's still plenty to spare. We use this so we can configure vm.nr_hugepages at deployment time via Chef.
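As a worked example of the equation above (a sketch only; `estimate_nr_hugepages` is a hypothetical helper, and MB units are an assumption), using the shared_buffers value from the Cybertec post, 21965570048 bytes = 20948 MB:

```ruby
# Hypothetical helper wrapping the equation above; sizes in MB (assumption).
def estimate_nr_hugepages(shared_buffers_mb, max_connections, wal_buffers_mb)
  padding = shared_buffers_mb > 40_000 ? 500 : 100
  shared_buffers_usage = shared_buffers_mb + 200 + (25 * shared_buffers_mb / 1024)
  max_connections_usage = (max_connections - 100) / 20
  wal_buffers_usage = (wal_buffers_mb - 16) / 2
  ((shared_buffers_usage + max_connections_usage + wal_buffers_usage + padding) / 2.0).ceil
end

# 21965570048 bytes == 20948 MB; assumed defaults of max_connections = 100
# and wal_buffers = 16 MB.
puts estimate_nr_hugepages(20948, 100, 16)   # 10880
```

Notably, 10880 lands slightly above the 10719 that "postgres -C" reports for that shared_buffers value, which is the direction of error Troels asked for.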

Don.

--
Don Seiler
www.seiler.us
