Re: How to reduce writing on disk ? (90 gb on pgsql_tmp)

On 06/04/15 02:58, Scott Marlowe wrote:
On Wed, Jun 3, 2015 at 6:53 PM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
On Wed, Jun 3, 2015 at 4:29 PM, Joshua D. Drake <jd@xxxxxxxxxxxxxxxxx> wrote:

On 06/03/2015 03:16 PM, Tomas Vondra wrote:

Cache is not free memory - it's there for a purpose and usually
plays a significant role in performance. Sure, it may be freed
and used for other purposes, but that has consequences - e.g.
it impacts performance of other queries etc. You generally
don't want to do that on production.


Exactly. If your cache is reduced your performance is reduced
because fewer things are in cache. It is not free memory. Also the
command "free" is not useful in this scenario. It is almost
always better to use sar so you can see where the data points are
that free is using.

But if that WAS happening he wouldn't still HAVE 60G of cache!
That's my whole point. He's NOT running out of memory. He's not
even having to dump cache right now.

No one claimed he's running out of memory ...

What I claimed is that treating page cache as free memory is not really appropriate, because it is used for caching data, and that caching plays a significant role in performance.

Regarding the "free" output, we have no clue when the "free" command was executed. I might have been executed while the query was running, right after it failed or long after that. That has significant impact on interpretation of the output.

Also, we have no clue what else happens on the machine, so it's possible there are other queries competing for the page cache, quickly reusing the free memory (used by the large query moments ago) as page cache.

And finally, we have no clue what plan the query is using, so we don't know how much memory it uses before it starts spilling to disk. For example, it might easily be a single sort node, taking only 384MB (the work_mem) of RAM before it starts spilling to disk.
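
To illustrate, here is a minimal sketch (the table and column names are made up, and 384MB is simply the work_mem mentioned earlier in this thread) of how one might check whether a single sort node fits in work_mem or spills to pgsql_tmp:

    SET work_mem = '384MB';
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM orders ORDER BY customer_id;

    -- in the Sort node of the output, look for one of these lines:
    --   Sort Method: quicksort  Memory: 350000kB        (fits in work_mem)
    --   Sort Method: external merge  Disk: 95000000kB   (spills to pgsql_tmp)

Without the actual plan, a check like that is what would tell us where the 90GB of temp files are coming from.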


Further if he started using a few gig here for this one it wouldn't
have a big impact on cache (60G-1G etc) but might make it much
faster, as spilling to disk is a lot less intrusive when you've got a
bigger chunk of ram to work in. OTOH doing something like setting
work_mem to 60G would likely be fatal.

It'd certainly be fatal, because this query is spilling >95G to disk, and keeping that in memory would easily require 2-3x more space.
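
If he wants to experiment with more memory for this one query, a safer route than any global change is to raise work_mem just for the session or transaction, e.g. (the 2GB here is purely an illustration, not a recommendation):

    BEGIN;
    SET LOCAL work_mem = '2GB';   -- applies only to this transaction
    -- ... run the problematic query here ...
    COMMIT;

That way a single backend gets the larger allowance for its sorts and hashes, without every other backend being allowed the same.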


But he's not down to 3GB of memory by any kind of imagination. Any
working machine will slowly, certainly fill its caches since it's
not using the memory for anything else. That's normal. As long as
you're not blowing out the cache you're fine.

Once again, what about the 15GB shared buffers? Not that it'd change anything, really.
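
For completeness, it's trivial to check what the server itself has reserved (the values in the comments are just the ones mentioned in this thread, I haven't verified anything on the machine):

    SHOW shared_buffers;   -- 15GB here, resident in the postgres processes
    SHOW work_mem;         -- 384MB here, per sort/hash operation, per backend

Together with the page cache, all of that memory is doing useful work - none of it is really "free".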


--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance



