Re: Is this way of testing a bad idea?

"Fredrik Israelsson" <fredrik.israelsson@xxxxxxxxxxxxxx> writes:
> Monitoring the processes using top reveals that the total amount of
> memory used slowly increases during the test. When reaching insert
> number 40000, or somewhere around that, memory is exhausted, and the
> system begins to swap. Each of the postmaster processes seems to use a
> constant amount of memory, but the total memory usage increases all the
> same.

That statement is basically nonsense.  If there is a memory leak then
you should be able to pin it on some specific process.
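
One quick way to do that on Linux is to sample each backend's resident
set from /proc over the course of the run and see whether any single
process actually grows.  A rough sketch, assuming Python and a 2.6-era
/proc (note that VmRSS counts shared pages too, which matters below):

    import time

    def vmrss_kb(pid):
        # Resident set size of one process, in kB, from /proc/<pid>/status.
        with open('/proc/%d/status' % pid) as f:
            for line in f:
                if line.startswith('VmRSS:'):
                    return int(line.split()[1])
        return 0

    def watch(pids, interval=10, rounds=6):
        # Print RSS per pid every `interval` seconds; a steady climber
        # in one column is the process to blame.
        for _ in range(rounds):
            print(' '.join('%d:%dkB' % (p, vmrss_kb(p)) for p in pids))
            time.sleep(interval)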

What's your test case exactly, and what's your basis for asserting that
the system starts to swap?  We've seen people fooled by the fact that
some versions of ps report a process's total memory size as including
whatever pages of Postgres' shared memory area the process has actually
chanced to touch.  So as a backend randomly happens to use different
shared buffers its reported memory size grows ... but there's no actual
leak, and no reason why the system would start to swap.  (Unless maybe
you've set an unreasonably high shared_buffers setting?)
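
If you want to rule that effect out, /proc/<pid>/smaps (Linux 2.6.14
and later) splits the resident set into private and shared pages, so
pages of the shared buffer arena don't get mistaken for a leak.  A
minimal sketch along the same lines:

    import re, sys

    def rss_breakdown(pid):
        # Sum private vs. shared resident memory, in kB, from smaps.
        private = shared = 0
        with open('/proc/%s/smaps' % pid) as f:
            for line in f:
                m = re.match(r'(Private|Shared)_(Clean|Dirty):\s+(\d+) kB', line)
                if m:
                    if m.group(1) == 'Private':
                        private += int(m.group(3))
                    else:
                        shared += int(m.group(3))
        return private, shared

    if __name__ == '__main__':
        for pid in sys.argv[1:]:
            priv, shr = rss_breakdown(pid)
            print('pid %s: private %d kB, shared %d kB' % (pid, priv, shr))

A backend whose private total stays flat while the shared total creeps
up is just touching more of the shared buffers, not leaking.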

Another theory is that you're watching free memory go to zero because
the kernel is filling free memory with copies of disk pages.  This is
not a leak either.  Zero free memory is the normal, expected state of
a Unix system that's been up for any length of time.
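
You can check which of these you're seeing in /proc/meminfo: memory
counted under Buffers and Cached is page cache the kernel will hand
back on demand, so it shouldn't be treated as used.  A sketch, under
the same assumptions as above:

    def effectively_free_kb():
        # MemFree plus Buffers and Cached from /proc/meminfo -- the
        # figure that should stay healthy even as MemFree goes to zero.
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, rest = line.split(':', 1)
                info[key] = int(rest.split()[0])   # values are in kB
        return info['MemFree'] + info['Buffers'] + info['Cached']

If that number is holding steady while MemFree falls, it's the page
cache, not a leak, and real swapping (watch the si/so columns in
vmstat) shouldn't happen.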

			regards, tom lane

