Re: postgresql recommendation memory

On Wed, Nov 6, 2013 at 1:53 AM, Евгений Селявка <evg.selyavka@xxxxxxxxx> wrote:
> Thank you for the advice.
>
> 1)
> First of all, we use a Java app with a JDBC driver that can pool
> connections, which is why I don't think it's a good decision to put
> one more pooler between the app and the DB. Maybe someone has
> experience with pgbouncer and JDBC and could give good advice on the
> advantages and disadvantages of this architecture.

That's mostly a religious argument, i.e. you're going on feeling here
that pooling in JDBC alone is better than either JDBC plus pgbouncer
or plain pgbouncer alone. My experience is that JDBC pooling is not in
the same league as pgbouncer for configurability and performance.
Either way, get that connection count down to something reasonable.

If you've routinely got 200+ connections, you've got too many. Again,
about 2x the core count is the most you want on most machines for
maximum throughput, and 3x to 4x is really the absolute ceiling.
Unless you've got a machine with 40+ cores, you should be running far
fewer connections.

Basically, pgbouncer is very lightweight and can take thousands of
incoming connections and funnel them down to a few dozen connections
to the database.
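
Something like this minimal pgbouncer.ini sketch (the database name,
port and pool sizes here are just illustrative placeholders, not your
actual setup):

    ; pgbouncer.ini -- a minimal sketch
    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    max_client_conn = 2000     ; thousands of incoming app connections...
    default_pool_size = 32     ; ...funneled into a few dozen server connections

The JDBC pool would then connect to port 6432 instead of 5432. One
caveat: transaction pooling breaks session state such as server-side
prepared statements, which the PostgreSQL JDBC driver switches to by
default after a few executions of the same statement, so test that
first or use session pooling.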

> 2) Yes, this is my error in configuration, and every day or two I
> decrease work_mem and monitor my system and the PostgreSQL log,
> trying to find records about temp files. I will decrease work_mem to
> 16MB or maybe 32MB. But really, I repeat that I have about 100
> concurrent connections to my DB; I set that value with a big reserve,
> and I can't change the setting because the DB is in production.

If you've got one job that needs lots of memory and lots of jobs that
don't, look at my recommendation to lower work_mem for all the
low-memory jobs. If you can split those heavy-lifting jobs out to
another user, then you can use a pooler like pgbouncer to do admission
control by limiting that heavy lifter to a few connections at a time;
the rest will wait in line behind it (see the sketch below).
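
A sketch of the per-role half of that (the role name batch_user is
hypothetical, sizes are illustrative):

    # keep the global default small in postgresql.conf, e.g. work_mem = 16MB,
    # then raise it only for the heavy-lifting role:
    psql -c "ALTER ROLE batch_user SET work_mem = '256MB';"

In pgbouncer you can then give that role its own database alias with a
small pool_size (e.g. pool_size=3 in its [databases] entry) so only a
few of its connections run at once.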

> 3)
> I also read about disabling the OOM killer, and I set
> vm.overcommit_memory=2. My DB worked for a couple of days and then
> pacemaker stopped it, because I set the wrong value for
> vm.overcommit_ratio (I set it to 90). When pacemaker tried to execute
> psql -c 'select 1', the postmaster returned 'out of memory' and
> pacemaker stopped my production DB. I need to know the correct value
> for vm.overcommit_ratio, or how the postmaster allocates memory when
> it forks; maybe there is a formula or something? If I get answers to
> this question I can pick the right vm.overcommit_ratio.

You are definitely running your server out of memory, then. Can you
throw, say, 256GB into it? It's usually worth every penny to throw
memory at the problem. Reducing usage will help a lot for now, though.
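
To pick the ratio, note that with vm.overcommit_memory=2 the kernel
caps allocations at CommitLimit = SwapTotal + MemTotal *
vm.overcommit_ratio / 100, so the right value depends on how much swap
you have. A quick sketch of checking where you stand (the ratio here
is just an example):

    sysctl -w vm.overcommit_memory=2
    sysctl -w vm.overcommit_ratio=80
    # compare the resulting limit with what is actually committed:
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo

If Committed_AS regularly runs close to CommitLimit, fork() and
malloc() start failing with 'out of memory', which is exactly what
pacemaker saw.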

> 4)
> About vm.swappiness, I totally agree; I only turned it up for
> experimental purposes, because I had a problem with my DB freezing
> and was playing with different kernel settings, trying to pick the
> correct values. In the beginning I set it to 0 and everything worked
> fine.

The Linux kernel is crazy about swapping. Even with swappiness set to
0, it'll swap stuff out once it's gotten old, and suddenly your
shared_buffers are on disk instead of in RAM. On big-memory machines
(we use 128GB to 1TB of memory at work) I just turn swap off entirely,
because the bigger the memory, the dumber the kernel seems to get
about it.
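
Concretely, "turning it off" is just:

    sysctl -w vm.swappiness=0    # discourage swapping first
    swapoff -a                   # then disable all swap devices
    # also comment the swap entry out of /etc/fstab so it stays off
    # after a reboot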

> 6) I set these values:
> vm.dirty_bytes = 67108864, equal to my Smart Array E200i cache size.
> vm.dirty_background_bytes = 16777216, i.e. 1/4 of vm.dirty_bytes.

Ahh, I usually set dirty_ratio to something small like 5 to 10 and
dirty_background_ratio to 1. The less bursty the dirty background
writeback is, the better in DB land. Your numbers are fairly low,
assuming dirty_bytes is in bytes and not kilobytes or something. :) I
never use the byte-based settings, so I'm not sure one way or the
other.
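
(FWIW, vm.dirty_bytes and vm.dirty_background_bytes are indeed in
bytes, and setting the *_bytes form of a knob zeroes its *_ratio form
and vice versa.) A sketch of the two styles, pick one:

    # ratio style (what I usually use):
    sysctl -w vm.dirty_ratio=5
    sysctl -w vm.dirty_background_ratio=1
    # or the byte-based style you're already using:
    sysctl -w vm.dirty_bytes=67108864             # 64MB, the E200i cache size
    sysctl -w vm.dirty_background_bytes=16777216  # 16MB, 1/4 of dirty_bytes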

> > Basically don't TRY to allocate all the memory; try to leave 75% or
> > so free for the OS to allocate as buffers. After getting a baseline
> > for performance under load, then make bigger changes.
>
> Does this mean I should set effective_cache_size to 75% of my RAM?

That is a reasonable number, especially once you get the machine to
stop using so much memory for sorts and shared_buffers. The idea is
that when you look at free after the DB has been up for a day or two,
you should see 75% or so of your RAM allocated to cache/buffers.
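
I.e. something like this (the sizes are just an example for a 128GB
box):

    # after a day or two of uptime, most RAM should show up as cached:
    free -g
    # then size effective_cache_size in postgresql.conf to roughly match:
    #   effective_cache_size = 96GB    # ~75% of 128GB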

Good luck, keep us informed on your progress.

