Thank you for the advice.
1) First of all, we use a Java app with a JDBC driver which can pool
connections, so I don't think it is a good decision to put one more pooler
between the app and the DB. Maybe someone has experience with pgbouncer
and JDBC and could share the advantages and disadvantages of this
architecture (a config sketch follows this list).
3) 4) 5) I will plan downtime and decrease max_connections and
shared_buffers.
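For reference, a minimal pgbouncer.ini sketch for this kind of setup, in
case we do try it; the database name, paths, and pool sizes are my
assumptions, not a tested config. The JDBC caveat I know of: transaction
pooling breaks server-side prepared statements, so the driver would need
prepareThreshold=0, or pgbouncer would need pool_mode = session (which
saves fewer connections).

; pgbouncer.ini - hypothetical sketch; names and paths are assumptions
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling needs prepareThreshold=0 on the JDBC side
pool_mode = transaction
; roughly 2x cores, per the advice below
default_pool_size = 32
; all 350 app connections funnel into the small server pool
max_client_conn = 350

The JDBC URL would then point at port 6432 instead of the real server.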
vm.dirty_bytes = 67108864 - this value equals my Smart Array E200i cache size.
vm.dirty_background_bytes = 16777216 - 1/4 of vm.dirty_bytes.
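To make these two values persistent and active without a reboot, I would
put them in /etc/sysctl.conf and reload it, roughly like this:

# /etc/sysctl.conf
# 64MB, matching the E200i cache size:
vm.dirty_bytes = 67108864
# 16MB, 1/4 of vm.dirty_bytes:
vm.dirty_background_bytes = 16777216

# then, as root, apply the file without rebooting:
sysctl -p /etc/sysctl.conf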
> Basically don't TRY to allocate all the memory, try to leave 75% or so
> free for the OS to allocate as buffers. After getting a baseline for
> performance under load then make bigger changes.
Does this mean that I should set effective_cache_size to 75% of my RAM?
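If the answer is yes, on this 64GB box that would work out to roughly the
following (my arithmetic only, not a confirmed recommendation):

# postgresql.conf
# 64GB * 0.75 = 48GB; effective_cache_size is just a planner hint
# about OS cache size and does not allocate any memory itself
effective_cache_size = 48GB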
2013/11/6 Scott Marlowe <scott.marlowe@xxxxxxxxx>
On Sat, Nov 2, 2013 at 12:54 PM, Евгений Селявка <evg.selyavka@xxxxxxxxx> wrote:
SNIP
> max_connections = 350
SNIP
> work_mem = 256MB
These two settings together are quite dangerous.
1: Look into a db pooler to get the number of connections you need down
to no more than 2x the number of cores in your machine. I recommend
pgbouncer.
2: Your current settings mean that if you max out connections and each
of those connections does a large sort at the same time, they'll try
to allocate 256MB*350, or 89,600MB. If you run one job that can use
that much work_mem, then set it per connection or user for that one
connection or user only (a sketch follows this list). Allowing any
process to allocate 256MB is usually a bad idea, and doubly so if you
allow 350 incoming connections. Dropping work_mem to 16MB means a
ceiling of about 5G of memory if you get swamped and each query
averages 1 sort. Note that a single query CAN run more than 1 sort, so
it's not a hard limit and you could still swamp your machine, but it's
less likely.
3: Turn off the OOM killer. On a production DB it is unwelcome and
could cause some serious issues (see the sysctl sketch after this list).
4: vm.swappiness = 0 is my normal setting. I also tend to just turn
off swap on big-memory Linux boxes because Linux virtual memory is
often counterproductive on db servers. Some people might even say it
is broken; I tend to agree. Better to have a process fail to allocate
memory and report it in the logs than have a machine slow to a crawl
under load. But that's your choice. And 64G isn't that big, so for me
you're in the in-between zone on whether to just turn off swap.
5: Turn down shared_buffers to 1 or 2G.
6: Lower all your vm dirty ratio/size settings so that the machine
never has to write a lot at one time.
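A sketch of points 2 and 5 in config form; "reporting" is a hypothetical
role name standing in for the one job that genuinely needs the large sort
memory:

# postgresql.conf (points 2 and 5)
shared_buffers = 2GB
# worst case about 5.6GB with 350 connections averaging one sort each
work_mem = 16MB

-- and in SQL, raise work_mem only where it is actually needed:
ALTER ROLE reporting SET work_mem = '256MB';
-- or per session, around one big query:
SET work_mem = '256MB';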
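And a sketch of points 3 and 4 at the kernel level. vm.overcommit_memory
= 2 is the standard way to keep the OOM killer away from the postmaster;
the ratio is an assumption to tune against your swap size, and the point
6 values appear in the reply at the top of this mail:

# /etc/sysctl.conf (points 3 and 4)
# strict overcommit: allocations fail up front instead of the
# OOM killer being invoked later
vm.overcommit_memory = 2
vm.overcommit_ratio = 80
vm.swappiness = 0

# or, to drop swap entirely on a big-memory box (run as root):
swapoff -a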
Basically don't TRY to allocate all the memory, try to leave 75% or so
free for the OS to allocate as buffers. After getting a baseline for
performance under load then make bigger changes.
--
Best regards, Селявка Евгений