Re: High CPU load on Postgres Server during Peak times!!!!

On Wed, Sep 23, 2009 at 12:25 PM, Shiva Raman <raman.shivag@xxxxxxxxx> wrote:

First let me say that upgrading to a later version is likely going to
help as much as anything else you're likely to pick up from this
discussion.  Not that this discussion isn't worthwhile, it is.

> If you run a 'ps ax|grep post' do you see anything that says 'idle in
> transaction'? (I hope a version that old will show it; my processes show up
> as postgres, not postmaster)
>
> Lots of requests show as 'idle in transaction'.
>
> Currently I am restarting the database via a cron job every 30 minutes
> during off-peak time and every 15 minutes during peak time.

Wow.  It'd be way better if you could fix your application /
connection layer to not do that.
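To get a quick count of how many backends are stuck that way, something like this works (the sample `ps` lines below are made up for illustration; on the live server you'd pipe real `ps ax` output into the same grep):

```shell
# Hypothetical `ps ax` output; on the server, run:
#   ps ax | grep -c 'idle in transaction'
sample='12345 ?  S  0:00 postgres: app app 10.0.0.5 idle in transaction
12346 ?  S  0:01 postgres: app app 10.0.0.6 SELECT
12347 ?  S  0:00 postgres: app app 10.0.0.7 idle in transaction'
printf '%s\n' "$sample" | grep -c 'idle in transaction'   # prints 2
```

If that count stays high between checks, it's the application holding transactions open, not the database.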

> For the concurrency of 300 to 400 users, the following parameters were
> changed in postgresql.conf, based on the calculation provided in the
> PostgreSQL documentation.
>
> max_connections = 1800 (too many open connections will result in unwanted
> memory waste)

This is very high.  If you only need 400 users, you might want to
consider setting this to 500 or so.

> Shared Buffers = 375 000 ( 375000 * 8 * 1024 /100 = 3072 MB ) # proposed
> value is 1/4 the actual memory

Reasonable, but don't just blindly use 1/4 of memory.  For transactional
loads, smaller is often better.  For reporting dbs, larger is often
better.  Test it to see what happens with your load and varying
amounts of shared_buffers.
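For reference, the raw arithmetic on that setting (a quick sketch, assuming 8 kB per buffer and integer math):

```shell
# 375000 shared buffers at 8 kB each, converted to MB
buffers=375000
echo $(( buffers * 8 / 1024 ))   # prints 2929, i.e. a bit under 3 GB
```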

> Effective Cache Size = 266000 ( 266000 * 8 * 1024 /100 = 2179 MB ) #
> proposed value is 1/3 memory after OS Allocation

Better to add the OS cache / buffer amount to shared_buffers to get
it, which would be much higher.  Generally it's around 3/4 of memory
on most machines.
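As a sketch of what that works out to, assuming a 12 GB box (implied by shared_buffers = 3 GB being 1/4 of memory):

```shell
# 3/4 of a hypothetical 12 GB of RAM, expressed as 8 kB pages for
# effective_cache_size
ram_mb=$(( 12 * 1024 ))
ecs_mb=$(( ram_mb * 3 / 4 ))       # 9216 MB
echo $(( ecs_mb * 1024 / 8 ))      # prints 1179648 (8 kB pages)
```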

> work_mem = 3000 ( 3000 * max connections * 1024 = 3000 * 1800 * 1024 = 5529
> MB ( this is the working memory for postgres) )

This is the max work_mem per sort or hash aggregate.  Note that if all
of your maximum backends connected and each did 2 sorts and one hash
aggregate at once, you could use max_connections * 3 * work_mem memory
at once.  Machine swaps til it dies.

Assuming this is 3000 8 kB blocks, that's 24 MB, which is high but not unreasonable.
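The worst case works out like this (a sketch assuming work_mem is 3000 8 kB blocks and three sorts/hashes per backend, as above):

```shell
max_connections=1800
work_mem_mb=$(( 3000 * 8 / 1024 ))               # ~23 MB per sort/hash
echo $(( max_connections * 3 * work_mem_mb ))    # prints 124200 (MB), i.e. ~121 GB
```

That's far more memory than the machine has, which is why a lower max_connections matters here.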


> max_fsm_pages = 20000 (This has to be analyzed and can be increased to
> 40000; this can be done after one or two days' observation)

To see what you need here, log into the postgres database as a
superuser and issue the command:

vacuum verbose;

and see what the last 5 or so lines have to say.  They'll look like this:

INFO:  free space map contains 339187 pages in 18145 relations
DETAIL:  A total of 623920 page slots are in use (including overhead).
623920 page slots are required to track all free space.
Current limits are:  10000000 page slots, 500000 relations, using 109582 kB.
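The two numbers to compare are the page slots in use and the configured limit; here's one way to pull them out (using the sample lines above as a stand-in; on a live server you'd feed it the real `vacuum verbose;` output):

```shell
# Sample tail of `vacuum verbose;` output (hypothetical numbers)
out='DETAIL:  A total of 623920 page slots are in use (including overhead).
623920 page slots are required to track all free space.
Current limits are:  10000000 page slots, 500000 relations, using 109582 kB.'
used=$(printf '%s\n' "$out" | awk '/page slots are in use/ {print $5}')
limit=$(printf '%s\n' "$out" | awk '/^Current limits/ {print $4}')
echo "$used of $limit page slots used"   # prints: 623920 of 10000000 page slots used
```

If "used" approaches the limit, max_fsm_pages needs to go up, or vacuum stops tracking some free space.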

-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
