On 11/14/2012 01:56 PM, Wu Ming wrote:
> This is interesting. About the "virtual size of one of the process",
> which process should I look up? Is it the one with the biggest
> virtual size?

Thinking about this some more, I haven't checked whether Windows adds
dirtied shared_buffers pages to each process's working set. If it does,
you'd still be multiply-counting shared memory. In that case, since you
can't use an approach like the one Depesz writes about here for Linux:

  http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/

it's going to be pretty hard to actually work it out. pg_buffercache
might be of some help
(http://www.postgresql.org/docs/current/static/pgbuffercache.html)
but it's not exactly friendly.

Yes, it's absurd that it's so hard to work out how much memory Pg uses.
It'd be nice if Pg provided better tools for this by allowing the
postmaster to interrogate backends' memory contexts, though that would
only report how much memory Pg thought it was using, not how much it
was actually using from the OS. Really, OS-specific tools are required,
and nobody has written them - at least, I'm not aware of any that have
been published. Most of the problem is that operating systems make it
so hard to tell where memory is going once shared memory is involved.

>> *if* Windows XP doesn't add dirtied shared buffers to the working
>> set, then that would be a reasonable approximation.
>
> http://i45.tinypic.com/vr4t3b.png
>
> For example, from the above screenshot, the biggest virtual size of
> all the postgresql processes is 740004 K. So can we say the total
> approximate memory usage of the postgresql service is 740004 K +
> total_of_working_sets (4844 K + 10056 K + 5408 K + 6020 K + ...)?

If Windows does count dirtied shared buffers in each working set, then
that estimate would be massively out, because it'd be double-counting
shared memory. Off the top of my head I'm not sure how best to test
this.
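As an aside, pg_buffercache can at least show how much of shared_buffers is actually occupied and dirtied, from inside the database rather than from the OS. A rough sketch, assuming the extension is installed (it needs a superuser) and the default 8 kB block size:

```sql
-- Requires: CREATE EXTENSION pg_buffercache;
-- Multiply buffer counts by the block size (8192 bytes by default).
SELECT sum(CASE WHEN relfilenode IS NOT NULL THEN 1 ELSE 0 END) * 8192
         AS used_bytes,
       sum(CASE WHEN isdirty THEN 1 ELSE 0 END) * 8192
         AS dirty_bytes
FROM pg_buffercache;
```

This reports what Pg thinks it's using inside shared_buffers, not how the OS accounts for that memory, so it complements rather than replaces the working-set numbers.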
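To make the quoted estimate concrete, here is the arithmetic being proposed, as a trivial sketch. It uses only the figures visible in the screenshot; the "..." in the original list means more backends exist, and those are not included here.

```python
# Figures quoted from the screenshot, in kilobytes.
biggest_virtual_kb = 740004
working_sets_kb = [4844, 10056, 5408, 6020]  # "..." backends omitted

# The proposed estimate: biggest virtual size plus every working set.
naive_total_kb = biggest_virtual_kb + sum(working_sets_kb)
print(naive_total_kb)  # 766332
```

If Windows folds dirtied shared_buffers into each working set, this sum counts the same shared memory several times over, which is exactly the double-counting concern raised above.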
Maybe do a simple query like `SELECT * FROM some_big_table` in `psql`
and dump the result to the null device (\o NUL on Windows, if I recall
correctly, but again not tested) or to a temp file, and see how much
the backend grows. If it grows by more than a few hundred K, then I
expect the dirtied shared_buffers are probably being counted against
it.

>> And, of course, because PostgreSQL looks like it uses a TON of
>> memory, even when it's really using only a small amount.
>
> Maybe you're right. If I close one of the memory hogs, it gets a bit
> better. Maybe I was too quick to blame PostgreSQL; it's just that I
> can't close and restart the other applications, because they are
> either too important or too slow to reload, whereas the postgresql
> service is very quick to restart. I hope you'll understand.

This has been an ongoing source of confusion, and it's one that isn't
going to go away until OSes offer an easy way to ask "how much RAM is
this group of processes using in total?".

-- 
Craig Ringer                    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services