Re: Debugging shared memory issues on CentOS

Mack Talcott <mack.talcott@xxxxxxxxx> writes:
> The pattern I am seeing is that postgres processes keep growing in
> shared (this makes sense as they access more of the shared memory, as
> you've pointed out) but also process-specific memory as they run more
> queries.  The largest ones are using around 300mb of process-specific
> memory, even when they're idle and outside of any transactions.

There's quite a lot of stuff that a PG process will cache in local memory
once it's acquired the info, for example:
- relcache (relation descriptors)
- catcache (system catalog entries)
- compiled trees for plpgsql functions

300MB worth of that stuff seems on the high side, but perhaps you have
lots and lots of tables, or lots and lots of functions?
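A quick sanity check on that theory is to count how many catalog objects the
backends could plausibly be caching. This is just a sketch run against the
system catalogs; a schema with tens of thousands of tables or functions would
make a few hundred MB of relcache/catcache per backend unsurprising:

```sql
-- Rough object counts that drive relcache/catcache growth.
SELECT count(*) AS tables_and_indexes FROM pg_class;
SELECT count(*) AS functions FROM pg_proc;
SELECT count(*) AS columns FROM pg_attribute;
```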

If this is the explanation then flushing that info just results in
restarting from a cold-cache situation, which doesn't seem likely to
be a win.  You're just going to be paying to read it in again.
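It's also worth verifying how much of a backend's reported RSS is genuinely
private rather than shared-buffer pages being double-counted. A minimal sketch,
assuming a Linux kernel new enough to break RSS out into anon/file/shmem
components (here it inspects the shell's own PID; substitute a real postgres
backend PID):

```shell
# Split a process's RSS into private anonymous memory (RssAnon),
# file-backed pages (RssFile), and shared memory (RssShmem).
# Only RssAnon is truly process-specific; RssShmem is the shared
# buffers that every backend appears to "grow into" over time.
pid=$$
grep -E '^(VmRSS|RssAnon|RssFile|RssShmem):' /proc/$pid/status
```

If RssAnon is far below the 300MB figure, the apparent growth is mostly shared
pages being touched, not per-process bloat.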

> As for CentOS using 1.5g for disk caching, I'm at a loss.  I have
> played with the 'swappiness', setting it down to 10 from the default
> of 60 with sysctl.  It didn't have any effect.

Swappiness has nothing to do with disk cache.  Disk cache just means that
the kernel is free to use any spare memory for copies of file pages it's
read from disk lately.  This is almost always a good thing, because it
saves reading those pages again if they're needed again.  And the key word
there is "spare" --- the kernel is at liberty to drop those cached pages
if it needs the memory for something more pressing.  So there's really no
downside.  Trying to reduce that number is completely counterproductive.
Rather, my observation was that if you had a gig and a half worth of RAM
that the kernel felt it could afford to use for disk caching, then you
weren't having much of a memory problem.  However, apparently that
snapshot wasn't representative of your problem case:
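To see this directly, the kernel's own accounting shows that cached pages sit
alongside free memory rather than competing with applications. A sketch for an
older kernel without the MemAvailable field:

```shell
# "Cached" is page cache the kernel can drop on demand, so
# MemFree + Cached approximates memory actually available to
# processes, regardless of what "free" appears to show.
grep -E '^(MemTotal|MemFree|Cached|SwapTotal|SwapFree):' /proc/meminfo
```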

> Once 70-80% of memory is reached, the machine starts using swap, and
> it keeps growing.  At first, queries become slightly slower.  Then
> some larger selects start taking 10, then 20, then 30 seconds.  During
> this, vmstat shows 5-20 procs waiting on both CPU and I/O.

I wonder if the short answer for this isn't that you should be using fewer
backends by running a connection pooler.  If the backends want to cache a
couple hundred meg worth of stuff, it's probably wise to let them do so.
Or maybe you should just buy some more RAM.  8GB is pretty puny for a
server these days (heck, the obsolete laptop I'm typing this mail on
has half that much).
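For the pooler route, something like PgBouncer in transaction-pooling mode
caps the number of live backends while still serving many clients. A
hypothetical pgbouncer.ini fragment (the database name and sizes here are
made-up placeholders, not values from this thread):

```ini
; Sketch: ~20 backends each caching a couple hundred MB is ~4-6GB,
; which fits in 8GB; 500 clients share those 20 server connections.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```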

			regards, tom lane


-- 
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance



